AI biosecurity concerns prompt call for national rules

Illustration: Sarah Grillo/Axios
Biosecurity experts are calling on governments to set new guardrails to limit the risks posed by advanced AI models applied to biology.
Why it matters: AI models trained on genetic sequences have double-edged potential to help scientists design new medicines and vaccines, but can also be used to create new or enhanced pathogens.
Where it stands: Experts from OpenAI and RAND, among other institutions, say today's large language models don't significantly increase the risk of a bioweapon being created, in part because there's not enough data to train them.
- There's an impression today that "producing biological weapons with all of the information that AI can provide is easy, straightforward," Sonia Ben Ouagrham-Gormley, deputy director of the Biodefense Graduate Program at George Mason University, said at a Center for New American Security event on Wednesday.
- But "producing biological weapons is very complex, very complicated."
Yes, but: Many researchers say it's only a matter of time before AI models improve enough to make that a possibility.
- "I think we need to listen to the AI developers about the potential capabilities of the next generation," Anita Cicero, deputy director of the Johns Hopkins Center for Health Security, said at the event.
- She also raised the possibility of automated labs, or labs run remotely through the cloud, using trial and error to create a new pathogen — which could reduce the expertise needed to conduct such experiments.
What they're saying: AI developers committing to evaluate models is "important but cannot stand alone," Cicero and her co-authors wrote in the journal Science.
- They call on governments to evaluate models trained on large amounts of biological data or especially sensitive data before they are released.
- That could address potential risks without hampering academic freedom, they argue.
- The group also wants companies and institutions that synthesize nucleic acids — turning genetic sequence information into physical molecules — to screen their customers and their orders.
- Beginning in October, federally funded researchers will be required to get their synthetic nucleic acids from providers who screen purchases.
Friction point: Some researchers say more work should be done first to understand what AI models can do in laboratory settings.
- Ben Ouagrham-Gormley argued that before AI biological models are regulated, "We absolutely need to better understand the capability itself, and then that will allow us to assess better the risks associated with those [models]."
Between the lines: Much of the work developing AI biological models happens in the private sector.
- Government should focus on "advanced biological models that pose potential high-consequence threats, whether or not they were developed with federal funding assistance," Cicero and her co-authors wrote.
The big picture: It's incumbent on the U.S. and other leaders in the field to set up their own governance systems, Tom Inglesby, director of the Johns Hopkins Center for Health Security, told Axios.
- "Ultimately, we do think that we should be aiming for international harmonization, just like we do for other kinds of safety and security issues around science," he said.
