Axios AI+ Government

April 03, 2026
It's Friday! We've got a look at how California is racing ahead on AI rules, becoming the testing ground for AI governance as the Trump administration pushes back on state efforts.
Today's newsletter is 1,417 words, a 5.5-minute read.
1 big thing: California becomes the testing ground for AI rules
To see where tech policy is going in the U.S., look west: California is escalating its push to regulate AI across multiple fronts.
Why it matters: California's multipronged approach makes it likely that AI companies in the U.S. will treat the state's rules as a de facto national standard, even as the White House moves to rein in state regulation.
- It follows a familiar pattern: California acts first, companies adapt to keep doing business there, and Congress dithers, eventually ceding its role to states due to gridlock.
Driving the news: Gov. Gavin Newsom signed an AI executive order this week as state legislators advance a slate of AI bills and weigh other regulatory avenues.
The big picture: California is moving ahead as the Trump administration pushes for a national AI standard that would preempt nearly all state-level AI laws.
- The White House last month unveiled its AI legislative framework, essentially a wish list for an elusive bill from a divided Congress.
- Meanwhile, Newsom, a 2028 Democratic presidential contender, is positioning himself as the inverse of President Trump on AI.
Still, the state is hardly immune to Big Tech influence even as it manages to pass tech legislation.
- OpenAI and Anthropic have been highly involved in pushing various bills and ballot initiatives, often pairing with online safety groups to do so, with mixed results.
What they're saying: "California's always been the birthplace of innovation. But we also understand the flip side: in the wrong hands, innovation can be misused in ways that put people at risk," Newsom said in a statement about the executive order.
- "While others in Washington are designing policy and creating contracts in the shadow of misuse, we're focused on doing this the right way."
- Google and Anthropic declined to comment on the order. OpenAI said in a statement that "we are glad to see Governor Newsom continuing to lead on AI so California can continue to lead the world on AI."
- A White House official told Axios that the administration is "proud" of its AI framework and "happy to engage with legislation that is consistent with the framework."
How it works: Newsom's AI order aims to "raise the bar for AI companies seeking to do business with the state," per the announcement, and strengthens the state's procurement standards.
- The state will develop a plan for contracting best practices requiring companies to explain their policies on distribution of illegal content, model bias and violation of civil rights and free speech.
- In a clear shot at the Pentagon-Anthropic dispute, the order also enables California to "separate the procurement authorization process from the federal government's if needed," per the release.
Lawmakers in the California State Assembly and Senate have also introduced a sweeping AI chatbot bill to protect minors, building on a chatbot law already in effect.
- "While Washington steps back from its responsibility to protect Americans from AI harms, California is stepping up on every front," Assemblymember Rebecca Bauer-Kahan told Ashley.
- "We can lead the world in AI and still demand that it works for people, not against them."
What we're watching: Multiple AI and tech policy sources told Axios that Newsom's executive order itself may lack strong legal teeth, but it will end up influencing company policies because they all want to do business with California.
- "What's notable here is California continuing to use procurement as a policy lever," said Joseph Hoefer, principal and chief AI officer at public affairs firm Monument Advocacy.
- "If you want access to the world's fourth-largest economy, you're going to need to demonstrate baseline responsible AI practices. That's a pretty powerful signal to the market."
2. Trump administration appeals Anthropic ruling
The Trump administration is appealing a federal judge's order temporarily blocking the Pentagon's ban on Anthropic, per a filing yesterday.
Why it matters: The appeal escalates a high-stakes fight that could reshape how the government works with AI companies.
- The Trump administration isn't giving up its fight against Anthropic anytime soon, even as there's been chatter that the deal could get revived.
Driving the news: Justice Department lawyers filed a notice of appeal to the U.S. Court of Appeals for the Ninth Circuit on behalf of the Pentagon.
- Anthropic and the Pentagon have been at odds for months after a deal for the government to use Claude collapsed over the company's red lines for the technology.
Context: A federal judge last week temporarily paused the Trump administration's designation of Anthropic as a supply chain risk in an early legal win for the company.
- Anthropic said the designation was causing immediate and irreparable harm as business partners rethink their contracts and federal agencies remove Claude.
- A parallel case is ongoing in a D.C. court.
- Anthropic is arguing in both proceedings that the Pentagon is violating the First Amendment and procurement law, while the Defense Department says that the dispute is about the military's ability to use the technology, not speech.
3. U.S. kicks off push to sell AI abroad
The Commerce Department this week opened a call for proposals to help U.S. companies bundle and export end-to-end AI systems to international markets.
Why it matters: The Trump administration's AI strategy is based partly on a bet that the best way to win the AI race is to embed U.S. tech deep inside other countries' digital infrastructure.
Driving the news: U.S. companies can now submit proposals to "deliver full-stack American AI technology packages to international partners," per an announcement from Commerce. Applications are open through June 30.
- Companies approved for the program created by President Trump's executive order will be promised government financial incentives that could give them an edge in the global AI race.
The big picture: As countries push for AI sovereignty — the ability to control the use, development and regulation of AI — Commerce is positioning the AI exports program as a way to deliver that on U.S. terms, officials told Axios.
- The program is designed to be flexible, officials said, and to allow foreign partners to maintain control of their own data and infrastructure.
How it works: Companies will team up on proposals pitching bundled AI systems, including chips, data pipelines, models and security, to foreign markets.
- Commerce won't use a set scoring system or checklist to rank proposals, per a release. It will require companies to include a statement "describing how the proposal advances U.S. national interests."
4. Anthropic employees bet on midterms
Anthropic is launching a corporate PAC, following other tech companies that operate similar employee-funded PACs.
Why it matters: 2026 is shaping up to be a huge year for political spending aimed at influencing AI policy.
Driving the news: Anthropic is planning to establish AnthroPAC, the company announced on Friday. It will be funded through voluntary employee contributions capped at $5,000 per person annually under federal election law.
- It will be overseen by a bipartisan board and disclose its activity through FEC filings.
- AnthroPAC is expected to support federal candidates in both parties who are involved in AI policy.
Reality check: Corporate PACs are strictly regulated and funded by voluntary employee contributions. Anthropic and other companies with such PACs can't contribute corporate funds directly.
The big picture: The move comes as the company ramps up its D.C. presence amid intensifying fights over AI regulation and its Pentagon contract.
- Anthropic has made a broader push into political spending, including a $20 million donation to a bipartisan advocacy group, Public Action First, focused on AI safeguards and transparency.
- Companies like Google, Microsoft, Amazon and Meta all operate similar PACs funded by employees.
5. The Output: Data centers, procurement and more
Here's our guide to catch you up on the AI policy news you may have missed this week:
🛑 Maine moratorium: Maine is moving to temporarily ban large new data center construction until 2027 to study the impact on the environment and power grid, the Wall Street Journal reports.
🔦 Procurement rules spotlight: The Business Software Alliance is urging the General Services Administration to make major changes to its federal AI procurement proposal, including narrowing the prohibition on buying foreign AI systems.
📱 Age verification bill: Rep. Josh Gottheimer (D-N.J.) yesterday rolled out the bipartisan Parents Decide Act, which would require developers like Google and Apple to verify users' ages when a device is set up and let parents set controls.
Thanks to Mackenzie Weinger and David Nather for editing and Matt Piper for copy editing.