Axios AI+ Government

May 01, 2026
It's Friday, and we've got a look at how the White House is rethinking Anthropic.
Join Axios Live in Arlington on Tuesday, May 12, at 5:30pm ET for an event looking at how federal policy and education institutions are innovating to prepare the AI-era workforce, featuring Sen. Todd Young (R-Ind.) and Northern Virginia Community College president Anne M. Kress. RSVP here.
Today's newsletter is 1,419 words, a 5.5-minute read.
1 big thing: Washington has a new Anthropic problem
Anthropic is both a risk to and a necessity for AI progress, at least in the White House's telling.
Why it matters: That tension is shaping AI policy in real time, as the White House realizes it needs the company it's been fighting.
Driving the news: After months of animosity and legal battles with the Pentagon, the White House is inching toward welcoming Anthropic back into the government fold because its most advanced models are too powerful to ignore.
The big picture: The Trump administration's goal on AI has been to be as hands-off and pro-innovation as possible. But as models get more powerful, that stance is breaking down.
- Washington is stepping in, shaping policy around who gets access to the most advanced systems and how they're deployed, driven by growing urgency over what the technology can do.
Flashback: The standoff started earlier this year when talks broke down over how the Pentagon could use Anthropic's AI in classified settings.
- That led to public spats, lawsuits, new deals struck with other frontier AI companies, and the unprecedented move to label Anthropic as a supply chain risk, a designation usually reserved for foreign adversaries.
- The White House at one point considered an executive order meant to weed out Anthropic from government systems entirely, as Axios previously reported.
Yes, but: The government couldn't ice Anthropic out for long.
- That realization sank in as its powerful Mythos model rolled out and agencies — despite the Pentagon spat — started testing it alongside other AI companies' most advanced cyber models.
- As the Pentagon and Anthropic continued to battle in court, the White House kicked off a thaw with the company.
What they're saying: "When you're regulating by contract, it's basically creating a huge amount of power in the agency that's negotiated that contract and then becomes effectively the de facto policy of the administration," Jessica Tillipman, associate dean for government procurement law studies at George Washington University, told Axios.
- "When other agencies don't like that decision, that's when you start to see these carve-outs because they don't want to be bound by what was effectively a failed negotiation by the Pentagon."
Responding to a Wall Street Journal report that said the government opposed Anthropic's plans to expand Mythos access to more companies, citing a lack of compute, an Anthropic spokesperson said in a statement:
- "We are working closely with the US government to quickly advance shared priorities, including cybersecurity and America's lead in the AI race."
- "Compute is not a constraint ... and we are engaged in collaborative conversations with the government on bringing additional parties in. We appreciate the administration's continued partnership as cyber capabilities advance."
The White House is mulling an executive action that could both address government use of advanced AI systems and carve a path forward in its dispute with Anthropic, Axios scooped earlier this week.
- Talks are in flux, per sources familiar with meetings with the White House this week, and no draft guidance addressing these issues is final.
- Tech and cyber companies, along with trade groups, have been participating in meetings broadly touching on these topics.
What we're watching: It's unclear whether any executive action will resolve the standoff with the Pentagon, which hasn't dropped its disdain for the company.
- The Pentagon today announced an agreement with seven other top AI companies to use their advanced capabilities on classified networks for "lawful operational use."
- Defense Secretary Pete Hegseth on Thursday said Anthropic is "run by an ideological lunatic who shouldn't have sole decision-making over what we do" during testimony on Capitol Hill.
Maria Curi contributed to this report.
2. Sanders splits with Washington on AI arms race
Sen. Bernie Sanders (I-Vt.) is calling on Washington to collaborate with China on AI, breaking from a bipartisan approach that frames AI development as a race between the two countries.
Why it matters: Sanders, who is writing the progressive playbook on AI, is shifting the focus away from U.S.-China competition and toward international cooperation around AI safety.
Driving the news: Sanders this week brought together researchers from the U.S. and China to discuss the "existential threat" of AI and how the two countries could work together.
- "In the last five months, I've seen the emergence of what I like to joke with my wife as the Bernie to Bannon coalition," said Massachusetts Institute of Technology professor Max Tegmark, referring to MAGA influencer Steve Bannon.
- "Extremely unlikely bedfellows from across the whole political spectrum saying, 'This is crazy. This is absolutely nuts. Let's do something about it.'"
- Tegmark zeroed in on how chatbots may be harming young people: "For someone to say that we must legalize this kind of evil for profit because [of] China makes absolutely no sense."
Panelists called on scientists in both countries to work together to set global safety standards.
- "The first thing we have to change is the inaccurate narrative that the U.S. and China are engaged in [an] AI race," Tsinghua University professor Xue Lan said. "It's a global race to see who can really develop the best model that can be safe and reliable."
- Xue acknowledged there's a real geopolitical rivalry, but said that there should be "safe zones" for cooperation on AI safety.
3. Exclusive: AI use booms in states
State governments are rapidly embracing AI by launching low-risk pilot programs, but haven't yet figured out how to measure their impact, according to a new analysis from Code for America shared exclusively with Ashley.
Why it matters: AI promises to make government more efficient and cut costs.
- But in practice, that's proving difficult to quantify — and in the short term might mean more work for government staffers before there are real results.
Driving the news: Utah, New Jersey, Pennsylvania, North Carolina, Maryland, Texas and Vermont are the leading states on AI use in terms of "building institutional capabilities required to govern AI as a long-term public sector asset," per the report.
- The report finds that states across the U.S. are at vastly different levels of AI-readiness and fluency.
- West Virginia, Wyoming, Nebraska, Alaska, Florida and Kansas rank among the "earliest" states in their AI journeys, per the report.
Zoom in: The analysis assessed states on conditions for successful AI deployment, including leadership, training capacity, infrastructure, experimentation through pilots, full production use embedded in government operations, and measurable impact.
- Conducting research, automating workflows, detecting fraud, and deploying consumer-facing chatbots are common examples of how states are using AI.
4. The Output: The GUARD Act, surveillance pricing and more
Here's our guide to catch you up on the AI policy news you may have missed this week:
💬 Chatbot legislation updates: The GUARD Act advanced out of the Senate Judiciary Committee yesterday in a 22-0 vote, sending it to the full chamber for consideration.
- The bipartisan bill led by Sen. Josh Hawley (R-Mo.) would ban chatbot companions from interacting with kids under 18.
- It would also require bots to disclose that they're not human or licensed professionals, and create criminal penalties for companies that expose kids to sexual content via chatbots.
- What we're watching: Sens. Ted Cruz (R-Texas) and Brian Schatz (D-Hawai'i) earlier this week introduced a competing bill, the CHATBOT Act, which would require AI companies to build "family accounts" so parents can control how kids use chatbots.
🇪🇺 Chips Act 2.0: The EU is preparing a revised Chips Act that would let Brussels invest directly in semiconductor fabs, per Bloomberg.
🤝 Middle powers unite: U.K. Technology Secretary Liz Kendall this week said that Britain will work with other "middle powers" like Germany, France, Japan and Canada on AI security.
- "This government believes AI sovereignty is not about isolationism or attempting to pull up the drawbridge and go it alone," she said in a speech at the Royal United Services Institute.
- "There is more we can and must do to build our sovereign capabilities and increase our leverage by working with our allies, especially other middle power nations."
🛒 Data pricing ban: Maryland has become the first state to ban "surveillance pricing" in grocery stores, with Gov. Wes Moore signing a bill this week blocking retailers from using personal data to set higher prices.
Thanks to Mackenzie Weinger and David Nather for editing and Matt Piper for copy editing.
Sign up for Axios AI+ Government