Anthropic says Pentagon's "final offer" is unacceptable

Anthropic CEO Dario Amodei. Photo: Chance Yeh/Getty Images
Anthropic said Thursday that there has been "virtually no progress" in negotiations with the Pentagon, and CEO Dario Amodei said the company could not accept what defense officials had labeled their final offer on AI safeguards.
Why it matters: Anthropic faces a Friday 5:01pm deadline to let the Pentagon use its model Claude as it sees fit or potentially face severe consequences.
What they're saying: "The contract language we received overnight from the Department of War made virtually no progress on preventing Claude's use for mass surveillance of Americans or in fully autonomous weapons," Anthropic said in a statement.
- "New language framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will. Despite DOW's recent public statements, these narrow safeguards have been the crux of our negotiations for months."
- Anthropic is not walking away from the table, even as significant gaps remain with less than 24 hours before the deadline. The company expects further negotiations.
The Pentagon did not immediately respond to a request for comment on the statement.
Catch up quick: The Pentagon and Anthropic are in a high-stakes feud over the limits Anthropic wants to place on the department's use of its AI model Claude: no mass surveillance or autonomous weapons.
- The Pentagon this week started laying the groundwork for one consequence — blacklisting the company as a supply chain risk — by asking defense contractors including Boeing and Lockheed Martin to assess their exposure to Anthropic.
- Alternatively, Defense Secretary Pete Hegseth threatened to invoke the Defense Production Act to compel Anthropic to provide its model without any restrictions. Such an order may be on murky legal ground.
The Pentagon's threats "are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security," Amodei said in a blog post.
- "Regardless, these threats do not change our position: we cannot in good conscience accede to their request," he added.
The big picture: The Pentagon's requirement that AI models be offered for "all lawful purposes" in classified settings is not unique to Anthropic.
- While Claude has been the only model used in classified settings to date, xAI recently signed a contract under the "all lawful purposes" standard for classified work.
- Negotiations to bring OpenAI and Google into the classified space are accelerating.
What's next: Amodei said the company remains committed to continuing talks.
- But if the Pentagon decides to offboard Anthropic, Amodei said the company "will work to enable a smooth transition to another provider."
Editor's note: This story has been updated with additional details throughout.

