
Can Chatbots Help You Build a Bioweapon?

Human extinction, mass unemployment, cheating on exams – these are just some of the far-ranging fears raised by the latest advances in artificial intelligence chatbots. Recently, however, concern has shifted to the possibility that chatbots could cause serious harm in another arena: by making it easier to construct a bioweapon.

These fears are based in large part on a report from a group at the Massachusetts Institute of Technology, as well as testimony before the U.S. Congress from Dario Amodei, the CEO of the AI company Anthropic. They argue that chatbots could provide users with step-by-step instructions to genetically engineer and produce pathogens, such as viruses or bacteria. Experts say a chatbot could enable someone with no prior scientific knowledge to create and use a bioweapon.

This threat deserves to be taken seriously: chatbots can make highly technical information easier to understand. But chatbots are not the only gatekeepers to that information. As policymakers weigh the United States’ broader biosecurity and biotechnology goals, it will be important to understand that scientific knowledge is already readily accessible with or without a chatbot.

Scientific knowledge, particularly online, is indeed plentiful for an interested learner, and usually for good reason: open, transparent, and accessible science drives advances in biotechnology and medicine. Education and outreach are key to increasing basic science literacy.

During my doctoral research in biochemistry, I learned how important accurate and clear information is in the lab. The training I received, supplemented by information I found online, taught me everything from how to use basic laboratory equipment to how to keep different types of cells alive. Finding that information requires neither a bot nor a degree.

Consider that high school biology students, congressional staffers, and middle-school summer campers already have hands-on experience genetically engineering bacteria. A budding scientist can find comprehensive resources on the internet: YouTube playlists cover everything from holding a pipette and balancing a centrifuge to visually inspecting samples and growing cells. When experiments don’t go as planned, researchers can crowdsource troubleshooting help from message boards such as ResearchGate, a resource I found to be a lifesaver in graduate school.

For those willing to dig deeper, online instructions go well beyond the basics. The scientific method depends on scientists meticulously detailing how they perform their experiments so that other researchers can repeat the work. Showing your work matters because unreliable results waste time and resources; a 2015 study estimated that U.S. companies and research institutions alone spend $28 billion per year on irreproducible preclinical research.

It is true that finding information on how to build a biological weapon may not be as simple as the examples above. Making or modifying a virus, for example, involves different steps, resources, and terminology than genetically engineering bacteria, as the high school students and congressional staffers did. A scientific foundation will give some users the confidence and technical skills to attempt these harder experiments; for others, a chatbot could help overcome that initial learning curve.

In other words, a chatbot that lowers the information barrier is less like helping a user scale an otherwise insurmountable wall and more like helping one step over a curb. It is still reasonable to worry that the extra assistance could make a difference for malicious actors: even the perception of a bot acting as a bio-assistant could be enough to engage and attract new actors, regardless of how widely the underlying information was already available.

If the barrier to information is already low, what can we do to actually make things safer?

First, developers can still build safeguards into chatbots. Preventing a chatbot from detailing how to make anthrax or smallpox is a sensible starting point, and some companies are already implementing such measures. A comprehensive biosecurity plan, however, should account for the fact that users may be able to jailbreak safety measures and that the relevant information remains available through other sources.

Second, we can think more critically about how to balance security with scientific openness for a small subset of scientific results. Some journal publications contain dual-use research: work conducted legitimately and under proper oversight that could nonetheless be used by a malicious actor to cause harm. While current policies regulating such risky research include directives to communicate results responsibly, experts have identified a need for more clearly defined and uniform publication policies.

Finally, we can add or strengthen barriers elsewhere, including in the acquisition of physical materials. Some misuse scenarios involve designing customized DNA strands and ordering them from mail-order synthesis companies. Requiring those companies to screen all orders could prevent malicious actors from obtaining potentially dangerous DNA sequences, whether the underlying information came from Google, a chatbot, or an old-fashioned scientific journal.

The new executive order on AI signed by U.S. President Joe Biden on Oct. 30 is a big step in the right direction, as it includes a DNA screening requirement for federally funded research. To be truly effective, however, such screening requirements should apply to all custom DNA orders, not just those funded by U.S. agencies.

Furthermore, biosecurity is only one of several concerns that must be balanced when it comes to equitable access to information and biotechnology. Some experts expect the global bioeconomy to grow to $4 trillion annually by 2032, encompassing creative solutions to climate change, food insecurity, and other global ills.

To achieve this, countries such as the United States need to engage the next generation of biological inventors and bolster the biomanufacturing workforce. Overemphasizing information security at the expense of innovation and economic advancement could have the unintended effect of derailing those efforts and their widespread benefits.

Future biosecurity policy should balance the need for broad dissemination of science with guardrails against misuse, recognizing that people can gain scientific knowledge from high school classes and YouTube, not just from ChatGPT.

The post Can Chatbots Help You Build a Bioweapon? appeared first on Foreign Policy.