Another AI threat: The next pandemic
Researchers are finding new ways for AI to be weaponized against us — and bioterrorism enabled by AI tops the list.
Why it matters: Bioattacks have been assumed to be the province of governments, but now we face the prospect of rogue individuals and organizations gaining the capability.
- Converting a pathogen into a terror attack previously required teams of Ph.D.-level experts — but recent genetic engineering advances, combined with AI, have dramatically reduced the skills, money and time needed to engineer a catastrophe.
Yes, but: AI could also help biodefense.
- Biotech researchers Axios spoke to said it may become possible to create antibodies for viruses from scratch by around 2025 — blunting the death and injury rogue actors could inflict. But that's a big maybe.
Driving the news: MIT researchers asked undergraduate students to test whether chatbots "could be prompted to assist non-experts in causing a pandemic," and found that within one hour the chatbots suggested four potential pandemic pathogens.
- The chatbots helped the students identify which pathogens could inflict the most damage, and even provided information not commonly known among experts.
- The students were offered lists of companies that might assist with DNA synthesis, and suggestions on how to trick them into providing services.
Step back: Biological warfare dates back centuries — but generative AI changes the dynamics.
- AI has figured in previous "Biodisaster X" scenarios — but before the advent of generative AI, it was cast as an enabler of fast vaccines or 24/7 monitoring, not as an accelerator of disaster.
What's happening: Step-by-step assembly protocols for producing infectious viruses are now widely available.
- While this information is not yet available for pandemic-capable viruses, would-be virus makers can still turn to commercially available chatbots for coaching through these steps.
- The spread of open source large language models also means the technology can be used without oversight.
The next level of danger comes from potential misuse of AI systems designed to advance biotech.
- In this environment, open data systems — like AlphaFold, from Google DeepMind, which predicts 3D structures of proteins — can become a vulnerability.
Be smart: Identifying or designing a synthetic virus on the screen is not the same as engineering it in a lab and then successfully releasing it.
Threat level: Some experts view the risk of AI-assisted bioterror as greater than that of nuclear weapons.
- "Even relatively mild pandemic viruses can kill more people than any nuclear device," writes MIT's Kevin Esvelt.
- And it's now much easier to access the materials needed for bioattacks than those for a nuclear weapon, since nuclear materials are strictly regulated.
- Yet many life sciences leaders think AI threats are a distraction from fixing mundane elements of pandemic handling that went poorly during COVID — from uneven healthcare access to global vaccine distribution failures and politicization of policy and data.
What they're saying: "A pandemic virus synthesized anywhere will spread everywhere," Esvelt told Axios.
- Esvelt says he opposes the sharing of complete genome sequences of new viruses — to avoid handing "blueprints to rogue states, extremists, and zealots" — and would like to see a pandemic test ban treaty modeled after the Nuclear Test Ban Treaty.
- The MIT researchers recommend "pre-release evaluations of LLMs by third parties, curating training datasets to remove harmful concepts, and verifiably screening all DNA generated by synthesis providers or used by contract research organizations."
- Eric Schmidt, the former Google CEO who co-chaired the National Security Commission on Artificial Intelligence, called AI misuse in biology "a very near-term concern" in September.
- A network of EU think tanks recommends export controls for "dual-use life sciences technologies" (those that can be used to either help or harm people).
The other side: Biotech researchers are also using AI for pandemic prevention.
- The Human Immunome Project — which is developing a simulation of the human immune system — hopes to "prevent pandemics before they spread" by developing vaccines within weeks of outbreaks.
Flashback: Fears of dual-use biotech first hit the spotlight in 2022, when drug researchers showed how their AI-based drug-discovery software could be misused to generate 40,000 candidate chemical weapons and toxic compounds overnight.