
The wholly synthetic mRNA vaccines for Covid-19 saved nearly 20 million lives in just their first year of use, according to data published in 2022 by The Lancet. That success stands as the most prominent example of the power of synthetic biology, a field whose possibilities have excited me since I first heard the term more than 15 years ago. As scientists gain increasing dexterity in manipulating the basic elements of life, they are designing not only other synthetic vaccines, but also therapies for cancer, benign alternatives to fossil fuels, and even novel approaches to protecting endangered species.

Some of these efforts will succeed, and some surely will fail. But it is hardly hype to call what is happening in labs around the world a revolution in biology.


Technological advances do not come without a price, though. And as a rule, the greater the advance, the greater the risk that accompanies it. In a world where biology moves at the speed of light, those risks can get out of hand. The impact — whether accidental or deliberate — could be far more ruinous than any threat we have yet imagined. Imagine the effects if a virus more deadly than even SARS-CoV-2 were to spread throughout the world.

“If you simulate the airburst detonation of the largest currently operational intercontinental ballistic missile over a large city — New York, Beijing, Moscow, London — the estimated casualty count is about 3 million. SARS-CoV-2 has killed many more people than that,” Kevin Esvelt told me not long ago. Esvelt runs the Sculpting Evolution group at the MIT Media Lab (and for full disclosure, I teach a course there with him called Safeguarding the Future).

We have all heard that somewhere in some corner of the internet, you could learn how to make a bomb, even a nuclear weapon, if you had the vast resources required to do so. But there are many laws and treaties designed to prevent people from succeeding. It’s hazardous information.


There are almost no such rules when it comes to publishing the fundamental blueprints needed to make or alter a virus with the tools of synthetic biology. There are already thousands of people who could use readily available reverse genetics protocols, a genome sequence, and synthetic DNA to produce an infectious virus. As the price of computer power and synthetic DNA falls, the number of people capable of working with those blueprints can only grow.

By blueprints, I mean the genetic code — the DNA sequence that makes smallpox or polio or SARS-CoV-2. Those sequences are freely available on the internet. The sequences for the devastating influenza viruses that struck in 1889, 1918, 1957, and 1968 are also easy to find. The sequences of even deadlier viruses, including Marburg, Lassa, and Ebola, are out there, too.

In 1998, an international team of scientists retrieved the 1918 influenza virus, which killed as many as 50 million people, from six frozen corpses in the Arctic. The samples from those infected bodies were considered so dangerous that only a single scientist was permitted to work with them, and only in a highly secure laboratory at the Centers for Disease Control and Prevention.

But what about the sequence of the genome found in those corpses? Science magazine published it in the issue dated Oct. 7, 2005. Granted, that was before biology had made the transition to the digital world. And scientists still make serious attempts to control access to dangerous viruses in laboratories.

When it comes to the information needed to make them, however, we exercise almost no such caution. Publishing the sequences of biological discoveries is not just encouraged. In the academic and industrial systems we have today, it is expected.

Openness has always been a signature element of scientific discovery. Ideas are supposed to spread. And in a democracy, we embrace the free speech model of science. Researchers publish tens of thousands of genetic sequences every year. The vast majority pose no harm. The wide dissemination of the SARS-CoV-2 virus sequence helped scientists create the revolutionary mRNA vaccines in record time, offering just one example of the benefits of open scientific exchange.

But it wouldn’t take millions of sequences to produce a deadly biological weapon. Making the sequence of just one dangerous virus public would create an information hazard. And the possibility of that happening is growing faster than any virus that only infects the body.

This whole argument about what information we should make available to whom and when is difficult for me. I have been a journalist for 40 years. When journalists discover information, we publish it. We don’t let people tell us what not to write. Those principles can be messy, and sometimes we get it wrong. But in my experience, efforts to rein in the press are almost always self-serving and dangerous. Without the freedom to publish, the world is almost always a weaker place.

Analogies go only so far, though. Viral sequences can move around the internet just as easily as the words you are reading now. But viruses are not words. They are not even bombs. A bomb causes a single catastrophe. Anything that replicates and multiplies exponentially, the way a virus does, can be worse, particularly if it is engineered to be worse.

Many politicians, scientists, and public health officials seem to be fixated on how the SARS-CoV-2 virus made its way into humans. I don’t want to suggest it doesn’t matter. Of course, it matters. And the virus could have accidentally escaped from a lab. After all, leaks from laboratories are common — the last person known to have died of smallpox was exposed through such a leak. Yet, despite what the Department of Energy has concluded with “low confidence,” most data suggest the virus emerged from the wild, just as SARS did, and MERS, Ebola, HIV, and so many others.

Viruses don’t care where they came from. They are just looking for a place to live. Instead of wasting so much anger and energy on this debate, wouldn’t it make sense to focus more heavily on whether we should be doing these experiments at all? That has not been the way science operated in the past — but biology has essentially become digital information. And we need to take a closer look at what that means.

It is long past time to acknowledge that some experiments simply should not be carried out. It’s also long past time to introduce ways to protect humanity from those that go awry — either naturally or through an effort to cause harm.

It is hard to imagine that a species that could figure out how to write, print, and alter DNA cannot figure out a reasonable approach to regulating it. Scientists have suggested several approaches to monitoring potentially dangerous findings and managing them more intelligently. For instance, we could register synthetic biologists. Nobody can drive without a license; maybe you should also need some kind of legal permission to wield the power to create a deadly virus. (That would be a hard rule to enforce globally; that doesn’t mean we shouldn’t try.) There are already efforts to screen for dangerous sequences that might be ordered online, but they need to be expanded. It is also possible to put “watermarks” in DNA sold commercially, something akin to the watermarks you find in American currency. And researchers are trying to determine whether certain bands of ultraviolet light might kill pathogens without harming us — or any other species.

We also need to sequence wastewater more regularly to see if something new is multiplying exponentially. If so, we would have to act quickly to shut it down. None of these ideas would suddenly make it possible to ignore the risks presented by pathogens, but combined, they would surely make it harder for someone to do serious damage intentionally or unintentionally.

So far, the record is not promising. In 2021, USAID invested $125 million in a program that essentially asks experts to prospect for pandemic viruses, the kind that might spill over from animals. The program has a somewhat unwieldy name: “Discovery & Exploration of Emerging Pathogens – Viral Zoonoses,” or “DEEP VZN” for short. It also calls for scientists to take those potentially lethal viruses back to the laboratory and run experiments on their components.

The implication is that you can only make a vaccine to defeat a virus if you have worked on the virus first in the lab. That reasoning might have made sense before vaccines could be produced practically on demand. But in an age of synthetic mRNA technology, it almost certainly is not required in the same way.

And what would happen with the sequences of those newly prospected viruses?

Like all the others, they will be available, too. You will just have to check the internet. Or ask an indiscreet chatbot. Hard as it might be to believe, it is as if nobody has considered the possibility that this information could be used to hurt people, rather than to help them.

Michael Specter is a staff writer at The New Yorker and a visiting scholar at the MIT Media Lab. His audiobook, Higher Animals: Vaccines, Synthetic Biology and the Future of Life, has just been released by Pushkin Industries.
