Axios AI+ Government

April 10, 2026
Morning! AI leaders are pitching big policy ideas, but they're running into Washington reality.
Today's newsletter is 1,341 words, a 5-minute read.
1 big thing: What AI CEOs still don't get about Washington
AI CEOs' lofty pitches for AI governance may end up being pipe dreams in a town that routinely fumbles tech policy.
Why it matters: From OpenAI's Sam Altman to Anthropic's Dario Amodei, high-profile AI executives are eager to shape how their products are regulated and encouraged, rolling out sweeping policy ideas to manage the technology's impact.
- But Congress — on privacy, social media and now AI — has a history of getting stuck in the policy weeds, and lawmakers are now grappling with heavy lobbying and growing constituent demands on the future of the tech.
Catch up quick: OpenAI's new industrial policy paper describes AI changing the world on a scale similar to the Industrial Revolution, requiring aggressive policies like tax reform or a four-day workweek; other ideas in the paper, such as boosting child care, have been floated by progressives for decades.
- Anthropic's policy ideas have skewed more toward internal governance and transparency, such as economic audits to determine AI's impact on jobs, along with stricter export controls and greater government evaluation of AI systems.
Behind the scenes: The architects of these proposals aren't new to the policy world.
- OpenAI chief global affairs officer Chris Lehane has long argued for redistributing the gains of new technologies, from pitching a "new deal" for crypto to promoting policies that would spread AI's economic gains more broadly.
- Anthropic, meanwhile, is ramping up its D.C. presence under public policy head Sarah Heck, who was previously at Stripe and worked on global entrepreneurship and public diplomacy at the White House National Security Council under former President Obama.
What they're saying: Lehane, a longtime political operative, told Axios that OpenAI is focused on promoting these policies at the state level, where there's a higher chance of success, especially in an election year when voters want to ensure they benefit from AI.
- "There is one truism in every campaign, which is, every politician says they lead, but what they typically do is they follow where the voters are, and they will move very quickly if they see voter sentiment on it," Lehane said.
- "We know the majority of Americans want government to take action on these issues," Heck said, pointing to Anthropic's policy positions on model transparency, economic impacts and energy.
Anthropic has also backed state-level AI transparency bills to mitigate the technology's biggest risks while calling for a federal standard, saying that transparency is the first step to give policymakers and the public visibility into how systems are developed.
The big picture: Silicon Valley and Washington are often speaking different languages: One moves fast and breaks things, while the other moves slowly — if at all.
- "Both coasts think that they're in charge," Nand Mulchandani, former chief technology officer of the CIA and of the Pentagon's Joint Artificial Intelligence Center, told Ashley in an interview at the HumanX conference in San Francisco this week.
- "But Silicon Valley now has power rivaling the power of what a government has. What we're seeing now is the first large fight over who's driving the bus."
While the AI industry has allies in the White House, the Trump administration has also run into limits in Washington.
- Efforts to preempt state-level action, for example, have repeatedly failed, and the White House's most recent AI framework proposal for Congress faces an uphill climb.
The bottom line: AI companies can float sweeping policy ideas knowing they're unlikely to go anywhere, and still claim they warned Washington.
2. Scoop: White House leans on GOP states
The Trump administration is pushing back on Republican-led AI bills in Nebraska and Tennessee, with sources familiar with the negotiations describing the outreach as pressure to weaken or abandon the efforts.
Why it matters: This behind-the-scenes push puts GOP state lawmakers who support AI guardrails but don't want to cross the White House in a tough position.
- It's happening as federal safeguards remain stalled in Congress, despite growing public support for regulation.
Behind the scenes: White House officials spoke to lawmakers in each state to push for changes, and some sources cast the outreach as an inappropriate pressure tactic.
- "It's important that we let the public know that we have unelected bureaucrats weighing in on issues they shouldn't be," one Republican state legislator told Axios.
- This comes after the White House Office of Intergovernmental Affairs sent a letter to Utah officials opposing a Republican-led AI transparency and kids' safety bill there, as Axios previously reported.
The other side: "We are proud of the President's National AI Framework. The Trump Administration is eager to work with partners who will help us implement that policy and achieve a comprehensive AI framework that serves all Americans," a White House official told Axios in a statement.
- These bills were part of broader conversations between administration and state officials about the president's policy priorities, a source familiar with the matter told Axios.
Zoom in: The bills discussed in Nebraska and Tennessee originally mirrored AI transparency measures in states like California and New York.
- In Nebraska, LB 1083, Republican state Sen. Tanya Storer's Adopt the Transparency in Artificial Intelligence Risk Management Act focused on AI risk management and transparency requirements. Storer did not respond to requests for comment.
- In Tennessee, SB 2171, Republican state Sen. Ken Yager's Artificial Intelligence Public Safety and Child Protection Transparency Act would impose safety and transparency measures for AI companies with protections for younger users. Yager did not respond to requests for comment.
"This bill was amended at the suggestion of the White House," Yager said during an April 7 committee hearing, adding that a phone call that morning led to an amendment that would "delete some portions of the bill."
- "It is not a broad AI regulation bill," Yager said. "It does not regulate an entity simply because they are a frontier developer. Only chatbots."
- The Nebraska bill was also amended to be narrowly focused on chatbots.
The bottom line: After multiple failed attempts to get Congress to override state AI laws, the White House is shifting its strategy to direct intervention, playing whack-a-mole across GOP state legislatures.
3. Inside Europe's AI playbook
In an interview with Ashley at the HumanX conference in San Francisco this week, a top European AI policy official pushed back on a common critique that the EU's tech regulation stifles innovation.
Why it matters: Europe is hoping to attract tech innovation and investment by highlighting a regulatory environment that, while more restrictive, is stable and consistent across all EU member states.
Driving the news: Magnus Brunner, who works on AI in his role as European commissioner for internal affairs and migration, said Europe's sweeping AI Act provides "guardrails" needed to build trust, even as the U.S. takes a more fragmented, state-by-state approach.
The big picture: Tech executives and government AI leaders in the U.S. often call out Europe's tech laws — such as the AI Act, the Digital Services Act and the Digital Markets Act — as burdensome and harmful to American tech companies.
- AI and tech companies often have to change their product offerings and follow strict rules to be able to operate in the EU.
- Brunner said Europe can be inflexible and slow, but it offers regulatory certainty while companies navigate confusion and a patchwork of laws in the U.S.
4. Florida AG launches investigation into OpenAI
Florida Attorney General James Uthmeier announced yesterday that his office has launched an investigation into OpenAI, citing national security concerns and ChatGPT's alleged role in a mass shooting at Florida State University last year.
Why it matters: The probe could intensify efforts to hold AI companies accountable for how their chatbots are used.
- That could lead to more formal legal scrutiny and possible regulation just as many of these companies are rumored to be planning IPOs.
What they're saying: "AI should advance mankind, not destroy it. We're demanding answers on OpenAI's activities that have hurt kids, endangered Americans, and facilitated the recent FSU mass shooting," Uthmeier said in a video posted on X.
Thanks to Mackenzie Weinger and David Nather for editing and Matt Piper for copy editing.