Scoop: White House workshops plan to bring back Anthropic

Illustration: Aïda Amer/Axios. Stock: Getty Images
The White House is developing guidance that would allow agencies to get around Anthropic's supply chain risk designation and onboard new models including its most powerful yet, Mythos, according to sources familiar with the matter.
Why it matters: The Trump administration appears to be doing a 180 on a company it previously claimed was such a grave security risk that it had to be ripped out of the federal government.
Behind the scenes: A draft executive action that is currently in the works could, among other steps related to the government's use of AI, give the administration a way to dial down the Anthropic fight, two sources said.
- One source described the White House efforts as a way to "save face and bring 'em back in."
- Earlier this month, White House chief of staff Susie Wiles and Treasury Secretary Scott Bessent met with Anthropic CEO Dario Amodei for what both sides called a productive introductory meeting on how the company and government can work together.
- The White House is convening companies across various sectors this week to inform the potential executive action and best practices for deploying Mythos. Those meetings include "table reads" of possible guidance that could walk back the Office of Management and Budget's directive on not using Anthropic in the government.
What they're saying: "The White House continues to proactively engage across government and industry to protect our country and the American people, including by working with frontier AI labs," the White House said.
- "The collective effort of all involved will ultimately benefit our economy and country. However, any policy announcement will come directly from the President and anything else is pure speculation."
- Anthropic declined to comment.
Catch up quick: The Pentagon and White House were once aligned on blacklisting a company both denounced as "woke." Then along came Mythos, which has demonstrated a frightening ability to automate cyberattacks, but could also be a powerful tool for defenders.
- Agencies across the federal government are clamoring for access to Mythos at the same time the Pentagon is battling Anthropic in court.
- Government agencies including the Pentagon are still able to use Anthropic's models while the legal fight plays out, and the National Security Agency is even using Mythos.
- But the feud has made cooperation between Anthropic and the government much more complicated.
Between the lines: Multiple sources have told Axios that while key players at the Pentagon are dug in on this issue, other stakeholders believe the fight has been counterproductive and are ready to find an offramp.
- It's unclear if the steps under consideration would resolve the Pentagon fight, or simply make it easier for other government agencies to work with Anthropic.
Yes, but: Even if the Pentagon lifts the supply chain risk designation, which some at the Pentagon and within Anthropic believe will happen at some point, the core dispute remains.
- Anthropic refused to sign an agreement allowing the Pentagon to use its model Claude for "all lawful purposes," insisting on banning its use for mass domestic surveillance or to develop fully autonomous weapons.
- The Pentagon said the dispute demonstrated Anthropic was not a reliable partner, and issued the unprecedented supply chain risk designation.
- The Pentagon is currently still able to use Claude, which is integrated into highly sensitive systems, but it is operating on older terms of service that both sides view as overly restrictive, a source familiar said. The Pentagon is also not receiving the latest updates of the model.
What to watch: The source said it's possible the sides could end up right back in contentious negotiations.
- OpenAI and Google have both signed agreements to let the Pentagon use their models under the "all lawful purposes" standard in classified settings, though both claim those deals respect the same two red lines Anthropic drew.
Dave Lawler and Sam Sabin contributed reporting.