Pentagon blacklists Anthropic, labels AI company "supply chain risk"

Defense Secretary Pete Hegseth. Photo: Aaron Ontiveroz/The Denver Post
President Trump announced Friday the U.S. government would blacklist Anthropic, in the most consequential and controversial policy decision to date at the intersection of artificial intelligence and national security.
The latest: Defense Secretary Pete Hegseth followed up by confirming the Pentagon will also declare Anthropic a "supply chain risk," a penalty usually reserved for companies from adversarial countries, such as Chinese tech giant Huawei.
- "In conjunction with the President's directive for the Federal Government to cease all use of Anthropic's technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security," Hegseth tweeted.
- "Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic."
The big picture: Anthropic rebuffed the Pentagon's demand to lift all safeguards on the military's use of its model, Claude, citing concerns about the use of AI for mass domestic surveillance and the development of weapons that fire without human involvement.
Breaking it down: Following Trump and Hegseth's announcements, the Pentagon will now move to sever its contract with Anthropic — valued at up to $200 million — and require companies it works with to certify they don't use Claude in those workflows. Trump said Anthropic would also be barred from all other government work.
- There will be a six-month wind-down period to allow the Pentagon, its customers and other government agencies to onboard alternatives to Claude.
- "I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology," Trump wrote on Truth Social.
Trump's announcement is particularly extraordinary because Claude is the only AI model currently used in the military's classified systems.
- It was used in the operation to capture Nicolás Maduro and could conceivably be used in a potential military operation in Iran.
- Defense officials praised Claude's capabilities in conversations with Axios, with one admitting it would be a "huge pain in the ass" to disentangle.
- The decision is also complicated for AI software firm Palantir, which uses Claude to power its most sensitive work with the military and will likely now need to strike a deal with one of Anthropic's competitors.
Trump wrote that the U.S. would "NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS."
- "The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution," he wrote.
The other side: Around 24 hours earlier, Anthropic CEO Dario Amodei had rejected what the Pentagon called its "best and final offer," laying out the company's concerns in a public statement and declaring "we cannot in good conscience accede to their request."
- Emil Michael, the senior Pentagon official who has been steering the negotiations with Anthropic and other AI firms, responded by calling Amodei a "liar" with a "God complex" who was "ok putting our nation's safety at risk."
- In his statement, Amodei said that if the Pentagon decides "to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions."
What to watch: Anthropic has not yet said whether it would attempt to fight the designation in court.
- The company has hundreds of millions of dollars on the line, given the possibility that many firms that work with the government, or hope to in the future, will steer clear of Claude.
- Anthropic has been growing at a staggering rate and establishing dominance in some key enterprise use cases for AI.
- Amodei and his company have garnered widespread praise in the past 24 hours for their principled stand. It's not yet clear how expensive that stand will be.
The Pentagon argues that there are many gray areas around what constitutes mass surveillance or autonomous weaponry, and that it's unworkable to have to litigate individual cases with a private company.
- In the Pentagon's view, once the military buys a tool, it applies its own standards and procedures to determine whether and how to use it. It therefore demanded that all AI firms make their models available for "all lawful purposes."
- Hegseth has railed against "woke AI," and the Trump administration has grown increasingly antagonistic toward Anthropic in particular, even as the military has grown more reliant on its model.
- "The only reason we're still talking to these people is we need them and we need them now. The problem for these guys is they are that good," a Defense official told Axios ahead of the tense meeting between Hegseth and Amodei on Tuesday.
What's next: Elon Musk's xAI recently signed an agreement to let the military use its model, Grok, in classified systems. Sources say it's unlikely to be a like-for-like replacement for Claude.
- Google's Gemini and OpenAI's ChatGPT are both available in unclassified systems, and the Pentagon is accelerating conversations about bringing them into the classified space.
- Hundreds of employees from Google and OpenAI have signed a petition in the past 24 hours calling on their companies to mirror Anthropic's position.
What to watch: OpenAI CEO Sam Altman said in a memo to staff on Thursday night that the company will uphold the same red lines as Anthropic on surveillance and autonomous weapons, but still hopes to strike a deal with the Pentagon.
Editor's note: This story was updated with Hegseth's announcement that Anthropic will be designated as a supply chain risk.

