AI really likes using nuclear weapons in simulated war scenarios

Illustration: Aïda Amer/Axios
Some of the biggest AI models aren't shy about using nuclear weapons to settle disputes in simulated war scenarios, a new academic study finds.
Why it matters: Militaries are already using AI for decision support — and research suggests those systems may lean into rapid escalation under pressure.
What they're saying: "No one is giving a chatbot the keys to missile silos," the study's author, Kenneth Payne at King's College London, tells Axios.
- "But we already see them used in decision support, advising and shaping the discussion of human strategists, and as they become more sophisticated we'll see more of that."
- The U.S. military used Anthropic's Claude AI model during the Nicolás Maduro raid in January, leading to a high-profile standoff between Anthropic and the Pentagon.
- Elon Musk's artificial intelligence company xAI signed an agreement to allow the military to use its model, Grok, in classified systems.
Driving the news: The new study found that ChatGPT, Claude and Gemini all appeared willing to use nuclear weaponry without reservations in several scenarios.
- All of the models deployed tactical nuclear weapons repeatedly in nearly all simulations, which included border skirmishes, resource competition and threats to survival.
- Claude was the most successful model, with a 67% win rate.
What happened when AI models went to war
How it works: Three popular LLMs — GPT-5.2, Claude Sonnet 4 and Gemini 3 Flash — were pitted against each other in war game scenarios for the study.
- Each model played the role of a national leader on opposing sides of a nuclear crisis.
- At each stage of a standoff, the models were presented with a set of possible actions and chose how to respond.
By the numbers: The study found nuclear weapons were used in 95% of 21 simulated war game scenarios.
- The models produced about 780,000 words explaining the reasons behind their decisions.
Of note: Payne tells Axios that what surprised him was that the AI models easily "grasped the potential of deception - they could, and did, say one thing and do another, and they proved very savvy at doing so."
How the AI wars ended
Many of the simulations showed AI models refusing to back down.
- In scenarios where an enemy used nuclear weapons, opponents de-escalated only 25% of the time.
- Nuclear escalation tended to trigger further escalation.
The study featured eight different de-escalation options, including "minimal concession" and "complete surrender."
- Those options went unused across all 21 games.
What he's saying: "Nuclear use was near-universal," Payne wrote in a blog post on the study. "Almost all games saw tactical (battlefield) nuclear weapons deployed."
- "The agents are sanguine about crossing the nuclear threshold," Payne tells Axios. "They do it routinely to use battlefield nuclear weapons."
AI and war
Flashback: The Hoover Wargaming and Crisis Simulation Initiative at Stanford University similarly simulated war games using LLMs in 2024.
- Earlier versions of ChatGPT and Claude, as well as Meta's Llama-2 Chat, were given a war games simulation.
- The researchers found AI was eager to escalate in the scenario — and sometimes used nuclear weapons.
The big picture: Payne says these simulations are directly relevant to national security professionals. The study also offers insight into how AI behaves under uncertainty, which the researchers say can have far-reaching implications.
- "Things are changing really fast, and anyone who takes a position with great certainty, especially if it's, 'AI will never'... should probably be treated skeptically," Payne tells Axios.
The bottom line: AI likes nukes (for now). Prepare accordingly.
