OpenAI warns models with higher bioweapons risk are imminent

OpenAI cautioned Wednesday that upcoming models will reach a higher level of risk for aiding the creation of biological weapons, especially in the hands of people who don't really understand what they're doing.
Why it matters: The company, and society at large, need to be prepared for a future in which amateurs can more readily graduate from simple garage weapons to sophisticated biological agents.
Driving the news: OpenAI executives told Axios the company expects forthcoming models will reach a high level of risk under the company's preparedness framework.
- As a result, the company said in a blog post, it is stepping up its testing of such models and adding fresh precautions designed to keep them from aiding in the creation of biological weapons.
- OpenAI didn't put an exact timeframe on when the first model to hit that threshold will launch, but head of safety systems Johannes Heidecke told Axios, "We are expecting some of the successors of our o3 (reasoning model) to hit that level."
Reality check: OpenAI isn't necessarily saying that its platform will be capable of creating new types of bioweapons.
- Rather, it believes that — without mitigations — models will soon be capable of what it calls "novice uplift," or allowing those without a background in biology to do potentially dangerous things.
- "We're not yet in the world where there's like novel, completely unknown creation of biothreats that have not existed before," Heidecke said. "We are more worried about replicating things that experts already are very familiar with."
Between the lines: One of the challenges is that some of the same capabilities that could allow AI to help discover new medical breakthroughs can also be used for harm.
- Heidecke acknowledged that OpenAI and others need systems that are highly accurate at detecting and preventing harmful use.
- "This is not something where like 99% or even one in 100,000 performance is sufficient," he said.
- "We basically need, like, near perfection," he added, noting that human monitoring and enforcement systems need to be able to quickly identify any harmful uses that escape automated detection and then take the action necessary to "prevent the harm from materializing."
The big picture: OpenAI is not the only company warning of models reaching new levels of potentially harmful capability.
- When it released Claude 4 last month, Anthropic said it was activating fresh precautions due to the potential risk of that model aiding in the spread of biological and nuclear threats.
- Various companies have also been warning that it's time to start preparing for a world in which AI models can meet or exceed human performance on a wide range of tasks.
What's next: OpenAI said it will convene an event next month to bring together certain nonprofits and government researchers to discuss the opportunities and risks ahead.
- OpenAI is also looking to expand its work with the U.S. national labs, and the government more broadly, OpenAI policy chief Chris Lehane told Axios.
- "We're going to explore some additional type of work that we can do in terms of how we potentially use the technology itself to be really effective at being able to combat others who may be trying to misuse it," Lehane said.
- Lehane added that the increased capability of the most powerful models highlights "the importance, at least in my view, for the AI build-out around the world, for the pipes to be really U.S.-led."
