DHS: AI holds promises, risks for biosecurity

Illustration: Sarah Grillo/Axios
A new Department of Homeland Security report calls out the double-edged potential of artificial intelligence to both bolster and jeopardize U.S. biosecurity.
Why it matters: AI hype has penetrated almost every industry, including energy, health care, pharmaceuticals and defense, but major questions remain about its day-to-day utility and the risk calculus surrounding its use.
The findings released by DHS this week are from the AI and Chemical, Biological, Radiological and Nuclear (CBRN) Report to the President, which was delivered to the White House in April.
- The report stems from President Biden's executive order on AI signed last year, which requested an assessment of the risk of AI being used to develop biological weapons.
State of play: AI is already influencing scientific research, which could have "positive and negative impacts, depending on the intent of the users and the quality of the data," according to the report.
- Publicly available AI models can enhance scientists' ability to design new molecules and understand how proteins and toxins interact in the body.
- AI is also augmenting agricultural practices and giving a boost to crop yields, the report notes.
- But some current chemical and biological AI models are plagued by high failure rates, confabulations and data-validity concerns.
- The report says further development of models should improve their accuracy. But AI experts suggest some of those errors may be harder to fix than others and may even be central to how these models work.
Yes, but: The report argues AI lowers the barrier to entry for designing new molecules through experimentation, which raises the risk of malign actors conceptualizing and conducting chemical and biological attacks.
- The "dual-use nature of the basic science information involved, and inconsistent access to relevant CBRN expertise, make it vital to encourage continued interaction among industry, government, and academia," it states.
At the same time, CBRN prevention, detection, response and mitigation capabilities could benefit from the use of AI tools, according to the report.
- It calls for the development of AI tools that could help attribute chemical or biological attacks or agents to their source or monitor compliance with international weapons agreements.
- Machine-enabled pattern recognition could spot signs of an attack before it ever unfolds.
- AI is already being applied to passenger and cargo screenings. Other "fertile areas" exist, the report states.
The big picture: The report touches a nerve at the center of the debate over access to AI models.
- On one side are massive tech companies that stand to profit from closed models, and on the other are researchers, startups and other firms that seek access to how the models work and the option to build on them.
- Many national security officials are sounding the alarm about open models, echoing the report's argument that they will make it easier for bad actors to engineer pathogens and other potentially dangerous agents.
- Leaders are also worried about AI-fueled misinformation, which could sway health-care decisions or emergency responses.
What to watch: The potential risk of AI to biosecurity is also hotly debated.

