Axios AI+ Government

December 05, 2025
It's Friday! Thanks for joining us at yesterday's AI+ Summit in San Francisco — if you missed it, check out our coverage below.
Today's newsletter is 1,210 words, a 4.5-minute read.
1 big thing: What Trump could do next on state AI laws
President Trump's push to ban state AI laws failed on Capitol Hill this week, raising the stakes for what the White House and AI czar David Sacks could try next.
Why it matters: Despite the administration's pressure campaign, lawmakers rejected including preemption language in the annual defense policy bill — but the White House isn't looking to take no for an answer.
- Three sources familiar with the matter say that an executive order to preempt state AI laws that the White House floated in November is now back in play.
- It's not clear yet whether the content of the possible executive order would be the same as the previously leaked draft.
- The White House did not respond to a request for comment.
Catch up quick: This week marked the second major defeat this year in the White House's bid to reshape the AI policy landscape through Congress.
- An earlier attempt collapsed on the Hill this summer, when senators stripped a similar provision from the budget bill in a 99-1 vote.
- The preemption proposal, which would override most state-level AI laws without creating a federal regulatory framework in their place, isn't in the final version of the annual defense policy bill, per House Republican leadership.
- House Majority Leader Steve Scalise (R-La.) told reporters this week that the National Defense Authorization Act wasn't the "best place" for it, but Republicans would be looking for other, unspecified ways to advance the measure.
The big picture: It's a significant loss for Trump and Sacks, and could set the stage for aggressive executive action aimed at gutting state AI laws.
Between the lines: This is dividing Republicans. It's widely opposed by key MAGA figures, state lawmakers, attorneys general and members of Congress, even though Trump publicly backed a ban.
- Steve Bannon ally Joe Allen commented on the potential plans to revive a preemption EO, posting that "Handing the reins of AI to tech corps and an ineffective Congress would be a disaster for the President."
What we're watching: The White House in November floated a possible executive order to override state AI laws by launching legal challenges and conditioning federal grants.
- The draft executive order seen by Axios calls for aggressive action, tasking the attorney general with establishing an "AI Litigation Task Force" within 30 days to challenge state AI laws.
- This approach would have far fewer teeth than legislation, and would face legal scrutiny.
- If Trump turns to this EO after the failed bid in Congress, it would mark a sharp escalation in the administration's effort to centralize and accelerate U.S. AI policy.
2. How AI is changing the world of HR
Your human resources officers are probably using AI for a lot of jobs — and they're also finding that human resources is one of the riskiest areas in which to use it.
Why it matters: Managing employees at work is becoming another space where machines may be making sensitive determinations previously left up to professionals.
What they're saying: Experts tell Axios that AI is a great tool for HR professionals to experiment with, but that it still hallucinates and can be unreliable.
- "If you're doing anything consequential, like drawing conclusions about performance, or God forbid anything about salaries or things like that, you would absolutely want to make sure you go in and check that the AI is correct," said Helen Toner, interim executive director of Georgetown's Center for Security and Emerging Technology.
- For compensation and promotion decisions made solely by AI, "You're seeing many organizations saying, 'OK, let's put a pause on that. It should at least have a human in the loop,'" said Alex Alonso, chief knowledge officer at the Society for Human Resource Management.
By the numbers: Per research from SHRM shared with Axios, recruiting is the top HR area where organizations are using AI.
- According to SHRM data, 65% of HR professionals surveyed use AI at work, compared with 45% of the general workforce.
- Of those who use it, 9 in 10 HR professionals say they're relying on it to generate job descriptions or screen applicants.
- HR AI tools are most effective for performance management and skills assessments, Alonso said.
3. Scott Wiener on AI's workforce impact
California state Sen. Scott Wiener warned yesterday that no level of government has grappled with the enormous workforce challenges of artificial intelligence.
Why it matters: Wiener is running for the congressional seat held by Rep. Nancy Pelosi (D-Calif.), and he's positioning himself as one of the country's most active lawmakers on AI policy.
What they're saying: "Just telling people, 'Oh, don't worry, we'll retrain you,' to a 53-year-old accountant whose job has just evaporated, that's a tough thing," Wiener told Ashley at Axios' AI+ Summit in San Francisco.
- "Human beings were not, in my view, designed to just not do anything. And while some people whose jobs are phased out, even if they have some level of income, will go on to do creative, great things, it can lead to other societal problems," he said.
- "So that is, it's a huge issue, it's one that we have not grappled with, and it's happening really fast."
As for whether tech and AI policy will be part of his campaign for Pelosi's seat, Wiener said he believes it's "something that people care about" to an extent.
- "It is not the same intensity as [the] cost of housing and cost of health care and secret police grabbing your neighbors and sending them to gulags in El Salvador."
- "Those things resonate at probably a more intense level. But people still care, even though it might not always be the thing they're voting on," Wiener said.
Context: Wiener authored a major California AI bill that Gov. Gavin Newsom signed into law this year to mandate transparency measures from frontier AI companies.
- The Transparency in Frontier Artificial Intelligence Act requires large AI developers to make public disclosures about safety protocols and report safety incidents.
- It also creates whistleblower protections and expands cloud computing access for smaller developers and researchers.
- "In the absence of federal action, I think there's an awareness that California has a huge role to play," he said.
State of play: The law has major implications for the country's biggest AI players — and underscores the appetite to regulate the technology at the state level.
4. Transformative AI is coming, as are the risks
The holy grail of technology — artificial general intelligence that can match or outdo humans — is on the horizon, Google AI guru Demis Hassabis says.
- But the risks of something going seriously wrong are also in sight, and some are even happening now, he warns.
The big picture: Google set the entire AI world spinning in recent months with the giant leaps in its frontier model Gemini, prompting a "code red" at OpenAI and forcing others to rethink the competitive landscape.
- But Hassabis and his DeepMind team are already thinking far, far ahead.
"We're definitely not there now" in terms of AGI, Hassabis said in an interview with Axios' Mike Allen at the Axios AI+ Summit.
- "Quite close. I think we're like 5 to 10 years away if you were to ask me," he said.
Thanks to Mackenzie Weinger and David Nather for editing and Matt Piper for copy editing.