Axios AI+ Government

March 06, 2026
It's Friday! We've got a packed issue for you today, so let's jump right in.
Today's newsletter is 1,509 words, a 5.5-minute read.
1 big thing: White House puts red state AI laws under scrutiny
GOP lawmakers in several red states want to pass AI safety bills, but their efforts are being chilled by the fear of angering the White House.
Why it matters: State lawmakers eager to tackle AI over concerns about kids, jobs and privacy are facing pushback from the White House, with tensions poised to spike next week.
- The Trump administration's pending list of "onerous" state AI laws could set up a federal crackdown on state regulation and reshape who writes the rules for AI.
Driving the news: The White House has made it clear — states should back off on AI laws in almost all cases until a federal framework passes.
- Next week, the administration is expected to announce which state-level AI laws it has identified as "onerous" that should be referred to the AI Litigation Task Force at the Justice Department, per President Trump's executive order.
What they're saying: This week, 50 Republican state lawmakers wrote to President Trump that they are "deeply concerned by the work of officials seeking to pressure lawmakers in Utah and other states to abandon legislation aimed at mitigating risks at leading AI labs and safeguarding constituents, including young people, from AI's worst harms."
- "We firmly believe state-led efforts are fully consistent with conservative principles and with your stated goals of promoting human flourishing while accelerating innovation."
In Utah, White House meddling knocked an AI bill completely off course, Axios first reported, driving AI safety advocates in the state to take out billboards targeting White House AI czar David Sacks.
- "The bill is unfortunately dead," Melissa McKay, policy director for Utah-based advocacy group Child First Policy Center, told Ashley. "The mid-session attack memo from the White House created enough confusion and conflicting opinions to doom it."
In Florida, the AI Bill of Rights backed by Gov. Ron DeSantis passed the state Senate this week, but House leadership's intervention will keep it from hitting the floor.
- State House Speaker Daniel Perez told reporters this week that he won't bring up the bill and he shares the White House's view on state AI laws.
- A spokesperson for DeSantis declined to comment on the future of the bill.
In Ohio, a bill that would bar AI from holding any form of legal personhood is currently being overhauled, said its sponsor, state Rep. Thad Claggett, who signed onto the 50-lawmaker letter.
- "We know how incredibly difficult it is for Congress to deal with leading-edge stuff, and that's okay. But, we are very interested in protecting our people, and so we're going to continue to work," he told Ashley.
- He said he will engage the White House at some point to see whether it has any input on his bill, but he won't reach out until the bill is ready.
The other side: The White House did not directly respond to questions about the GOP state lawmaker letter, the AI litigation task force or the Ohio bill.
What we're watching: The executive order calls for the administration to identify enacted laws, not bills still in the works.
- So California and New York's frontier AI safety laws are most likely to be targeted first. Plus, Colorado's AI law was the only one called out by name in Trump's order.
The bottom line: The tension between GOP state lawmakers who want to pass AI bills and a White House dead set on fending off as many state AI laws as possible is only heating up.
2. Anthropic CEO apologizes for leaked memo
The Pentagon has formally designated Anthropic a supply chain risk, as CEO Dario Amodei apologized yesterday for a leaked memo criticizing the Trump administration.
Why it matters: The dispute has raised fundamental questions over AI governance and cast a shadow over the industry's relationship with Washington.
State of play: Despite the apology, Anthropic still plans to sue over the Pentagon's designation of the company as a supply chain risk, which Anthropic says is narrow and restricts only certain activities.
"It was a difficult day for the company, and I apologize for the tone of the post," a new blog post from Amodei said yesterday, referring to an explosive internal memo to staff that put negotiations in jeopardy.
- "It does not reflect my careful or considered views. It was also written six days ago and is an out of date assessment of the current situation," Amodei said in the post, a copy of which was obtained by Maria.
- The company's "most important" goal now, he added, "is making sure that our war fighters and national security experts are not deprived of the important tools in the middle of war."
The other side: "The military will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical capability and put our warfighters at risk," a senior Pentagon official said in a statement.
3. Draft AI chip rules clash with the White House
The Commerce Department is moving ahead with draft rules to expand federal oversight of AI chip exports, but President Trump opposes any approach that mirrors former President Biden's restrictions, a senior White House official told Maria.
Why it matters: The draft regulations would give the government sweeping control over AI chip exports abroad as companies like Nvidia and AMD seek to enter more markets.
What they're saying: The draft "does not reflect what President Trump has said on export controls nor does it reflect the direction of the Trump administration on encouraging export of the American AI stack," a White House official told Maria.
- Another administration official said it's a "very, very early set of ideas" and anything the administration does will be in line with the White House's AI action plan.
Behind the scenes: The 129-page draft making its way through the government is the sixth iteration from the Commerce Department's Bureau of Industry and Security, a source familiar with the matter told Axios.
- The draft cleared the Commerce Department, a step that requires a signature from Secretary Howard Lutnick, and was sent to the Office of Management and Budget last week.
- OMB has until next Thursday to send back the results of the interagency review, the source added.
4. Senators want better data on AI job disruption
A bipartisan group of senators is urging the federal government to aggressively track how AI is impacting workers, per a letter shared exclusively with Axios.
Why it matters: Lawmakers say they need quality, real-time data to understand and respond to how the technology is reshaping the workforce.
Driving the news: The senators are calling on Labor Secretary Lori Chavez-DeRemer and the leaders of the Bureau of Labor Statistics and the Census Bureau to expand federal data collection on AI's impact on the economy and jobs.
- Sens. Mark Warner (D-Va.), Josh Hawley (R-Mo.), Jim Banks (R-Ind.), Maggie Hassan (D-N.H.), Mark Kelly (D-Ariz.), Tim Kaine (D-Va.), John Hickenlooper (D-Colo.), Todd Young (R-Ind.) and Mike Rounds (R-S.D.) signed the letter.
What they're saying: "As it stands, the federal government's statistical agencies' data, research, and measurement on artificial intelligence significantly lags behind non-governmental labor market data," the letter states.
Zoom in: The senators are urging agencies to add AI-focused questions to the survey that underpins the monthly jobs report and publish more public reports.
The bottom line: Lawmakers want hard data as they search for ways to respond to AI's impact on jobs.
5. House GOP advances kids' online safety package
House Energy and Commerce Committee Republicans advanced a kids' online safety package with guardrails for AI chatbots at a markup yesterday after bipartisan negotiations broke down.
The big picture: House Republicans moved legislation that Democrats say misses the mark on protecting kids online and doesn't align with bipartisan Senate-passed proposals.
The intrigue: Democrats said the House GOP package — which includes a version of the Kids Online Safety Act — contains preemption language that would limit states' ability to pass stronger laws to protect kids.
- The package also omits a "duty of care" that would require platforms to take reasonable steps to mitigate harms stemming from design features like endless scroll or algorithmic recommendations.
Zoom in: The package includes the SAFE BOTs Act, which focuses on AI interactions with kids.
6. The Output: U.K. AI, Gemini lawsuit and more
Here's our guide to catch you up on the AI policy news you may have missed this week:
⚖️ Gemini lawsuit: A wrongful-death lawsuit alleges that Google's chatbot contributed to a man's suicide, per the Wall Street Journal.
🇬🇧 Britain boosts AI: The U.K. is launching a new frontier AI research lab backed by £40 million in government funding over the next six years.
⛈️ AI weather overhaul: The National Oceanic and Atmospheric Administration is planning an internal overhaul focused on AI-powered weather models "while reorganizing into a leaner structure in the months ahead," per Bloomberg.
🏛️ Google faces inquiry: Sen. Josh Hawley (R-Mo.) yesterday announced the Senate Judiciary Subcommittee on Crime and Counterterrorism is opening an investigation into Google for failing to remove child sex abuse material.
- Hawley sent Alphabet CEO Sundar Pichai a letter asking for information and documents by March 18.
Thanks to Mackenzie Weinger and David Nather for editing and Matt Piper for copy editing.