After three bleak years, the coronavirus pandemic is finally drawing to a close, but pandemics as a general threat very much are not. At the moment, the most pressing concern is H5N1, better known as bird flu. Public-health experts have worried for decades about the virus’s potential to spark a pandemic, and the current strain has been devastating global bird populations–not to mention spilling over into assorted mammalian hosts–for more than a year. But those worries became even more urgent in mid-October, when an outbreak of the virus on a Spanish mink farm seemed to show that the mink were not only contracting the disease but transmitting it.
Occasional spillover from birds to mammals is one thing; transmission among mammals–especially those whose respiratory tracts resemble humans’ as closely as minks’ do–is another. “I’m actually quite concerned about it,” Richard Webby, an influenza expert at St. Jude Children’s Research Hospital, told me. “The situation we’re in with H5 now, we have not been in before, in terms of how widespread it is, in terms of the different hosts it’s infecting.”
What’s hard to gauge at this point, Webby said, is the threat this spread poses to humans. That depends on whether the virus becomes capable of transmitting among humans, and how efficiently. So far the risk is low, but that could change as the virus continues to spread and mutate. And the type of research that some scientists believe could help us get a grip on the threat is, to be blunt, dangerous. Those scientists consider it essential to safeguarding us against future pandemics; others think it risks nothing less than the annihilation of humanity. Which side you take may come down to which kind of pandemic you fear most.
The research in question aims to better understand pandemic threats by enhancing deadly viruses in highly controlled laboratory settings, the better to prepare for the moment when such viruses begin spreading around the globe. This type of science is often described as gain-of-function research. If that term sounds familiar, that’s probably because it has been invoked constantly in debates about the origins of COVID-19, to the point that it has become hopelessly politicized. On the right, gain-of-function is now a dirty word, inextricable from suspicions that the pandemic began with a lab leak at China’s Wuhan Institute of Virology, where researchers were doing experiments of just this sort. (The evidence strongly favors the alternative theory, that the coronavirus spread to humans from animals, but a lab leak hasn’t been totally ruled out.)
As a category, gain-of-function actually encompasses a far broader range of research. Any experiment that genetically alters an organism so that it does something it didn’t before–that is, gains a function–is technically gain-of-function research. This category includes many experiments that produce drugs and antibiotics. The subset people worry about is the enhancement of dangerous pathogens–work that, if such a pathogen escaped, could spread disease to the rest of the world. It’s easy to understand why some people are uncomfortable with it.
Despite the inherent risks, some virologists told me, this sort of research is crucial to preventing future pandemics. “If we know our enemy, we can prepare defenses,” the Emory University virologist Anice Lowen told me. The research enables us to pinpoint the specific molecular changes that allow a virus capable of spreading among animals to spread among humans; our viral surveillance efforts can then be targeted to those adaptations in the wild. It can also speed the development of antivirals and vaccines, and help us predict how the virus might evolve to evade those defenses.
This is not merely hypothetical: In the context of the bird-flu outbreak, Lowen said, we could perform gain-of-function experiments to establish whether the adaptations that have allowed the virus to spread among mink enhance its ability to infect human cells. In fact, back in 2011, two scientists separately undertook just this sort of research, adapting H5N1 to spread among ferrets, whose respiratory tract closely resembles our own. The research demonstrated that bird flu could not only spill over into mammalian hosts but, under the proper circumstances, pass between mammalian hosts–just as it now seems to be doing–and perhaps even human ones.
The backlash to this research was swift and furious. Critics–and there were many–charged that such experiments were as likely to start a pandemic as to prevent one. Top flu researchers put a voluntary moratorium on their work, and the National Institutes of Health later enacted a funding moratorium of its own. Both were eventually lifted after stricter oversight regimes were imposed, and the furor subsided, but researchers did not race to follow up on the two initial studies. Those follow-ups could have given us a better sense of the likelihood that H5N1 might develop the ability to transmit between humans, Angela Rasmussen, a virologist at the Vaccine and Infectious Disease Organization in Saskatchewan, Canada, told me. The moratorium, she said, had a chilling effect: “We haven’t really been looking at the determinants of mammal-to-mammal transmission, and that’s the thing that we really need in order to understand what the risk is to the human population.”
The controversy over the origins of the coronavirus pandemic has renewed calls for gain-of-function bans–or at least additional oversight. Last month, the National Science Advisory Board for Biosecurity delivered a set of recommendations that, if adopted, would tighten regulations on all sorts of virology research. Piled on top of more widespread anti-science rhetoric throughout the pandemic, the recommendations have contributed to a sense of embattlement among virologists. According to Seema Lakdawala, a flu-transmission specialist at Emory University, the advisory board “responds to a lot of hyperbole and without real evidence.”
But other researchers–those more on the “fear the annihilation of humanity” side of the aisle–see the recommendations as progress, important if not anywhere near sufficient. “Laboratory accidents happen,” Richard Ebright, a molecular biologist at Rutgers University who earlier this month co-founded an anti-gain-of-function biosecurity nonprofit, told me. “They actually happen on a remarkably frequent basis.” The 1977 flu pandemic, which killed roughly 700,000 people, may well have started in a laboratory. Anthrax, smallpox, and other influenza strains have all leaked, sometimes with deadly consequences. The original SARS virus has done so too–several times, in fact, since its natural emergence in 2003. Exactly how the coronavirus pandemic began may never be known. (This lack of definitive evidence has not stopped Ebright from tweeting that NIH officials potentially share the blame for the millions of deaths COVID-19 has caused worldwide.)
To researchers in this camp, the benefits of experimenting with such dangerous pathogens simply do not justify the risks. Kevin Esvelt, of MIT, told me that although identifying the mutations that could confer human transmissibility might seem marginally helpful to surveillance efforts, the likelihood of finding the ones that actually matter is slim. In the case of bird flu, researchers told me, the mutations that appear to have made the virus transmissible among mink are not the mutations identified in the 2011 studies. This sort of research, Ebright said, “has existential risk, but effectively no benefit or extremely marginal benefit.”
The stakes are enormous. The world has just lost millions of lives to the coronavirus; imagine losing as many again, or far more, to a pandemic of our own making. Esvelt fears that even if an outbreak were contained quickly, the mere awareness of a laboratory leak could do irreparable damage to public confidence, prompting people to doubt the safety of pandemic measures. More doubts, more deaths. Double or nothing, Esvelt suggested, doesn’t adequately describe the wager. “I’m just not comfortable risking so much of the biomedical enterprise on that toss of the dice.”
But a lab leak isn’t Esvelt’s main concern. What really worries him is bioterrorism. Imagine that researchers identify a pandemic-capable virus and share its genome sequence, as well as all the information necessary to replicate it, with the scientific community. Other scientists begin developing countermeasures, but now hundreds or even thousands of people have the ability to make something with the potential to kill millions; it’s not exactly a well-kept secret. Someone goes rogue, replicates the virus, releases it in an international-airport terminal, and, just like that, you’ve got a pandemic. Esvelt compares the danger to that of nuclear proliferation; in his view, nothing else comes close in sheer destructive capacity.
Part of what’s so tricky about this whole debate is that, unlike most competing public-health priorities, the question of whether to conduct this research presents a strictly binary choice: We can’t have it both ways. If we do the research and publish the results, we might improve our chances of preventing natural pandemics, but we necessarily create a security risk. If we don’t do the research, we don’t create the security risk, but we also don’t reap the preventive benefits. Either way, we have to choose.
Making that choice rationally, Esvelt told me, would require weighing the benefits of the research against the risk that a deadly pathogen escapes the lab. So far, he said, few people have done that math. Plenty of researchers, he thinks, may not even realize the trade-off exists: “If you’re the kind of person that has dedicated your life to working in comparative obscurity on a shoestring budget, trying to prevent a calamity with no real hope of reward or recognition, the very possibility that humans could be malevolent enough to deliberately cause the catastrophe that you fear–honestly, I think they’re such good people that it just never occurs to them.”
The gain-of-function proponents I spoke with were by no means naive to the threat of bioterrorism. Rasmussen said it was both presumptuous and patronizing to assume that biosecurity risks never occur to virologists; she pointed out that all lab personnel undergo background checks and are regularly trained to mitigate security threats such as extortion and blackmail. Excessive oversight, her camp argues, poses its own threat. The risk, Lowen told me, is that we “lose the war against infectious diseases by winning a battle on research safety.” “There is a great threat out there from nature,” she said. “Relative to that, the threat from laboratory accidents is small.”
Regardless of who’s right, Esvelt’s broader point is a good one: How you feel about this research–and which kind of pandemic worries you most–is, in the end, a question of how you feel about human nature, about nature itself, and about the relationship between the two. Which should we fear more–our world or ourselves?