Axios AI+ Government

May 08, 2026
It's Friday, and we've got the latest signals that the Trump administration may be rethinking its approach to AI oversight.
Today's newsletter is 1,246 words, a 4.5-minute read.
1 big thing: Behind Washington's AI safety pivot
The Trump administration appears poised to reshape the U.S. approach to AI security ahead of President Trump's trip to China next week.
Why it matters: What happens next could be the turning point for how the Trump White House handles the proliferation of the most advanced AI models in the world.
- And there are new reports of possible coordination between the two countries that are fiercely competing on AI development — a signal that neither side wants a dangerous arms race.
Driving the news: Washington is having a fire-alarm moment:
- The pro-AI growth administration is realizing it may need more guardrails than originally thought, and may not want to go it alone.
- There are new signs that the administration may consider executive action to rein in the most powerful AI models.
- At the same time, the U.S. and China are weighing official discussions about AI, and it could be added to next week's Beijing summit between Trump and Chinese leader Xi Jinping, the Wall Street Journal reported this week.
What they're saying: National Economic Council director Kevin Hassett suggested this week that the administration is considering an executive order, hinting at an oversight process for new AI models that would be similar to Food and Drug Administration approval of new drugs.
- "We're studying, possibly an executive order to give a clear roadmap to everybody about how this is going to go and how future AIs that also potentially create vulnerabilities should go through a process so that they're released to the wild after they've been proven safe, just like an FDA drug," Hassett told Fox Business on Wednesday.
White House chief of staff Susie Wiles also weighed in with a more general statement on X Wednesday night:
- "When it comes to AI and cyber security, President Trump and his administration are not in the business of picking winners and losers. This administration has one goal; ensure the best and safest tech is deployed rapidly to defeat any and all threats," Wiles wrote.
- "We appreciate the effort being made by the frontier labs to ensure that goal is met."
The latest: The government appears to be mulling a number of executive actions to possibly announce before Trump goes to China, sources tell Axios, cautioning that all talks are in flux and nothing is final.
- As Axios has been reporting, the possible measures include an executive action focused on AI and cybersecurity; one related to deployment and testing of new AI models; and another that could establish some form of licensing or approval process governing the limitations a model provider can place on government use of AI.
- This week, White House meetings have included both tech and financial services companies, one source familiar with the discussions told Axios, with Treasury Secretary Scott Bessent wanting banks to be looped into whatever happens.
- Google, xAI and Microsoft also signed pre-deployment testing deals this week with the Center for AI Standards and Innovation, part of the Department of Commerce, which also announced continued deals with Anthropic and OpenAI.
The other side: "The White House continues to balance advancing innovation and ensuring security in our AI policymaking. The Chief of Staff's X post reiterated this longtime commitment," a White House official said.
Reality check: A rhetorical shift is just that until the administration announces concrete steps beyond this week's hints.
2. Pentagon CTO: No Anthropic resolution in sight
There's no resolution between Anthropic and the Pentagon coming any time soon despite new agreements with other frontier AI companies, Pentagon chief technology officer Emil Michael said Thursday.
Why it matters: Michael's comments come as the White House considers potential executive action around AI testing and safety, moves that could eventually allow government agencies to work with Anthropic again.
- Google, Microsoft, Amazon Web Services, Nvidia, OpenAI, Reflection and SpaceX announced agreements with the Pentagon last week.
The big picture: The Pentagon is not ready to forgive Anthropic after their long spat over the use of the company's technology, even if Anthropic's own powerful cyber model Mythos has completely changed the discourse around AI regulation in Washington.
What they're saying: The agreements were "a statement by the biggest tech companies in the world involved in the AI space ... saying we support the Department of War, we support the U.S. government, and we support them using our services for all lawful use cases," Michael said during an onstage interview with New York Times national security correspondent David Sanger at a conference in Washington.
- "That's a counter statement to what we heard before," Michael said, that "really made us more comfortable that the industry does want to support the U.S. government."
- Asked by Sanger if he sees Anthropic's issues with the government being resolved, Michael said: "Not at the Department of War, no."
3. Colorado lawmakers propose new AI rules
Colorado lawmakers plan to scrap their first-in-the-nation artificial intelligence law and replace it with rules designed to appease the tech industry, Axios Denver's John Frank reports.
Why it matters: The long-awaited, hotly contested bill could define how AI is governed in Colorado and serve as a model for future regulation.
Driving the news: The latest regulatory framework would target automated decision-making technology that makes "consequential decisions" related to an individual's compensation, eligibility for and access to education, employment, housing, financial services, insurance and health care.
- Any entity using AI must notify consumers and allow them to review and correct any inaccurate personal data used in decision-making.
- Liability for violations of state discrimination laws may fall on the AI developer or the entity that deploys the product.
The law would take effect Jan. 1, 2027, to give the attorney general's office time to craft disclosure requirements and enforcement practices.
What they're saying: "This bill strikes an appropriate balance of protecting consumers while not being onerous on developers or the businesses [that] use AI technology," said state Sen. Robert Rodriguez (D-Denver), the bill's main sponsor.
Sign up for Axios Local, including Axios Denver, where this story first appeared.
4. The Output: EU pause, Connecticut rules and more
Here's our guide to catch you up on the AI policy news you may have missed this week:
👀 Releasing the brakes: EU legislators have agreed to pause the restrictions on high-risk uses of AI in a major change to the groundbreaking law, per Politico Pro.
- There's also going to be a carveout from the rules for AI that's used in industrial applications.
🚨 New rules alert: Connecticut lawmakers gave final approval to an AI regulation bill that would set up oversight committees and workforce development programs, Government Technology reports.
- It would also try to crack down on AI-driven discrimination in hiring and resume screening. Gov. Ned Lamont's office says he'll sign it into law.
💻 New kids' safeguards: New York Gov. Kathy Hochul announced an agreement on a state budget for fiscal 2027 that includes a collection of AI and tech safety measures for children, including:
- Disabling integrated chatbots for kids.
- Requiring platforms to use privacy protection settings for children.
- Requiring kids under 13 to get parental approval for new connections in online gaming.
🕰️ R&D time: A bipartisan group of senators introduced a bill to create the National Artificial Intelligence Research Resource, which would give researchers and educators the resources to advance AI research and development.
- The bill is sponsored by Sens. Todd Young (R-Ind.), Martin Heinrich (D-N.M.), Mike Rounds (R-S.D.), and Cory Booker (D-N.J.).
Thanks to David Nather for editing and Matt Piper for copy editing.