Axios AI+ Government

September 19, 2025
Good morning! Thanks for joining us for this Friday's edition of AI+ Government, our new weekly newsletter focusing on how governments encourage, regulate and use AI.
- Let us know your top AI policy questions — just click reply to drop us a note.
Today's newsletter is 1,576 words, a 6-minute read.
1 big thing: Democrats' emerging AI playbook
With the midterms just over a year away, Democrats are sharpening an AI message focused on how the technology could widen economic divides and harm workers.
Why it matters: Democrats did little to put guardrails on AI when they had control of Congress. They're now decrying Republicans' hands-off approach to regulation and coming up with messaging of their own.
Driving the news: Democrats are zeroing in on how to help workers displaced by AI, though the details haven't been fleshed out.
- Sen. Maria Cantwell (D-Wash.), the ranking member of the Senate Commerce Committee, has long focused on workforce development and plans to build upon efforts like the NSF AI Education Act, a bipartisan bill aimed at advancing STEM learning, her staffer told Maria.
What they're saying: At this week's AI+ DC Summit, Rep. Ro Khanna (D-Calif.) said that AI should benefit everyone, not just companies.
- "The Democrats should say our vision is AI that actually helps tackle the economic divides. ... Our job as politicians should be to make sure that the AI future is one that has the economic future of every family and community," Khanna told Ashley.
- "If you're going to have increased worker productivity, then workers should have a role in the share of those profits."
Sen. Mark Kelly (D-Ariz.) this week released his "AI for America" plan, laying out his idea for what he calls the AI Horizon Fund.
- The trust fund — paid for by tech companies — would support union-led apprenticeships and coordinate state and federal efforts for workers' development.
- "The biggest thing is coming up with a plan for how you're going to retrain people for other jobs," Kelly told Axios' Ina Fried at the AI+ Summit, noting that he has no immediate plans to introduce legislation.
- "We do not want to find ourselves in a situation where there are 10 million people that lost their jobs through AI and they don't have a good option. That's not good for anyone," he said.
Catch up quick: The Democrats' comments echo the strategy that moderate House Democrats rolled out in July, which centered on middle-class American workers.
The other side: Republicans are focused on staying out of the way.
- Sen. Ted Cruz (R-Texas), the chair of the Senate Commerce Committee, recently introduced the AI SANDBOX Act, which would give developers space to test AI "without being held back by outdated or inflexible federal rules," his office has said.
- Cantwell's staffer said the bill is overly broad, as it gives too much authority to the Office of Science and Technology Policy to determine which federal rules are overly burdensome, but added that Cantwell plans to work with Cruz to narrow the bill down.
- Cruz also told Ashley this week that he wants to revive his effort to kill state-level AI regulation.
In the House, Rep. Jay Obernolte (R-Calif.) is working on legislation that incorporates recommendations from the House AI Task Force report released last December, his staffer told Maria.
- "The bill will help shape the national framework for how AI policy is handled going forward, with the goal of keeping the United States in the lead on this critical technology," the staffer said.
The bottom line: Upskilling, retraining and apprenticeship programs are nothing new for tech transitions — the CHIPS Act, for instance, included a variety of workforce initiatives that are now uncertain under President Trump.
- Whether it's taxes, a trust fund or something else, Democrats will be looking to get aligned on the details of their proposals ahead of the midterms.
2. Exclusive: Google's patent office wish list
Companies like Google should pay up front for their patent applications as AI supercharges the work of inventors, Google general counsel Halimah DeLaine Prado told Ashley in an interview.
Why it matters: Google is sending a signal to John Squires, the newly confirmed leader of the U.S. Patent and Trademark Office.
- The tech giant believes that up-front payments from companies that file for high numbers of patents would help USPTO boost its resources so it can review more applications.
By the numbers: In the past year, 17% of Google's inventions were created with the help of AI, DeLaine Prado said.
- Google also holds the most patents related to AI as of May of this year, Axios previously reported.
What they're saying: AI has allowed inventors to increase their number of new works and filings to the patent office, yet the agency hasn't changed the way it operates, DeLaine Prado said.
- "That's not because of bad decisions, that's just numbers and math. We're at an important inflection point where AI is the thing that is driving the increased number of patents, but AI is also a tool that can be used to help review those patents," she said.
- "You absolutely need to have large filers pay up-front fees for their patents, particularly when they are complex," DeLaine Prado said.
How it works: Currently, filers pay on a schedule. "The idea is not to expect to go to appropriations to look for money, but actually use the innovators to pay into that system," DeLaine Prado said.
- She said that smaller businesses and individual inventors would benefit since a well-staffed USPTO office could review their patents faster, too.
What's next: DeLaine Prado is also advocating for Squires to encourage patent examiners to use AI even more in their work, and to streamline the process that allows people to challenge patents granted by the agency.
- "We're not trying to suggest something that is, you know, earth shattering, but actually could move the needle and further protect American innovation and to do so in an efficient way," she said.
3. Axios interview: Helen Toner
Helen Toner, who made waves in the AI world as one of the leaders of the failed effort to oust OpenAI's Sam Altman, is taking the helm of a major D.C. think tank aimed at engaging policymakers on the "fierce debates" around AI.
The big picture: Toner was recently appointed as the interim executive director of Georgetown University's Center for Security and Emerging Technology, which she says is diving deep into what AI "means for society."
This interview has been edited and condensed for clarity.
What should lawmakers in D.C. focus on if they're serious about regulating AI?
The AI policy landscape in general is very fractured and has a lot of disagreement, but something that a lot of people can agree on is that it would be much better to have more transparency and more visibility into these cutting-edge companies.
- That is something that Congress actually can do by just inviting executives to come to hearings and testify.
- That's an important power Congress has: to ask questions. What technologies are they developing? How are they testing them? What is the rate of improvement that they're seeing? What kind of risks are they measuring for and what results are they getting?
AI companies are staffing up in D.C. more than ever — what does it mean?
It is a real benefit that they deeply understand the technology, and that they can make sure that proposals that are on the table are actually realistic.
- At the same time, they clearly have a different set of incentives than what we would hope our elected officials and other policymakers are optimizing for — namely the broad public benefit and American interest writ large, rather than the pocketbooks of specific companies.
- There's often a disconnect between what the frontier AI company policy teams seem to be saying and thinking about the technology and what their own researchers and engineers are thinking and saying.
What's the biggest AI risk now compared to a few years ago?
A huge one that is just starting to be taken more seriously used to be this sort of dark matter of AI policy, which is AI companions and people building relationships with AI systems.
- It used to be something you could not really talk about in polite company.
- But as we're starting to see issues around mental health and dependency and some really tragic stories, I think it makes sense to be paying more attention to those issues.
4. The Output: Chatbots, China and more
Here's our guide to catch you up on the AI policy news you may have missed this week:
🇨🇳 China watch: U.S. leaders in government and business agree that the U.S. must win an AI race with China — but that's where the consensus ends, our Axios colleague Scott Rosenberg reported from this week's AI+ DC Summit.
💬 Chatbots spotlight: Parents of children who died by suicide or self-harmed after talking to AI chatbots urged Congress this week to take action, Ashley wrote.
🏛️ Hill AI: The House of Representatives will start using Microsoft Copilot in an effort to modernize the chamber and embrace AI, according to an announcement shared first with Maria.
🪖 Pentagon play: Maria had another exclusive this week, with a first look at data labeling company Scale AI making its platform available to the Defense Department in a contract worth up to $100 million.
🇬🇧 London calling: U.S. tech companies rolled out major investments into U.K. AI infrastructure as President Trump kicked off his state visit to London this week.
Thanks to Mackenzie Weinger and David Nather for editing and Matt Piper for copy editing.