Axios AI+ Government

October 10, 2025
Good morning ... it's Friday, and we're here with the latest news on AI policy. Just start scrolling!
Today's newsletter is 1,645 words, a 6-minute read.
1 big thing: States are making their own rules for AI
States across the country are forging ahead with their own rules for AI procurement and use in an effort to boost government efficiency and improve public services.
Why it matters: With Congress stalled on comprehensive AI legislation, several states are providing early examples of how governments can use the technology at scale.
Ashley spoke to a number of state chief information, data and AI officers across the country to hear what they're doing. Here's a sampling of what we found:
Vermont: Human-centered AI is the core of Vermont's approach, Denise Reilly-Hughes, state CIO and secretary of the Vermont Agency of Digital Services, told Ashley.
- Vermont's AI Commission created a code of ethics that guides how state employees use the technology in their work, Reilly-Hughes said, and they've kept it nimble with the "understanding that those guardrails will have to be continuously challenged."
- The state has created a "pilot factory" for different AI applications across state government.
- One area where Reilly-Hughes would caution against AI use is critical decision-making with human impact.
New Jersey: Dave Cole, the state's chief innovation officer, told Ashley that it's been full speed ahead on generative AI since 2023, when Gov. Phil Murphy issued an executive order charging the state government with finding ways it can improve services for residents and boost the economy.
- The state government launched the NJ AI Assistant, and 20% of the state workforce has used the system in the last year, Cole said.
- New Jersey has contracts with Microsoft, Google and Amazon Web Services, and employees have been evaluating how each AI system works for the problem they're trying to solve.
- The state hasn't pursued any chatbots for the public yet, Cole said, because of the potential to give residents false information: "We want to hold the line on quality and really get to a very high bar."
- Cole said one notable AI case study for the state involved his office working with the state agriculture department to collect data and use AI to identify children who were eligible for a summer food benefit. The effort identified more than 100,000 eligible recipients, who were then automatically enrolled in the program.
North Dakota: It's early days for AI experimentation in North Dakota, and that's partly due to the state's every-other-year legislature schedule and a lack of new funding specifically for AI implementation, Corey Mock, the state's CIO, told Ashley.
- Mock said state employees are using Microsoft Copilot and testing enhanced search engines for potential public use.
- "We have not rolled out anything public facing ... that would require more guardrails for deployment."
Pennsylvania: The Keystone State became the first in the U.S. to pilot ChatGPT Enterprise for employee use when it kicked off the program last January, and it plans to expand the pilot, per Gov. Josh Shapiro's communications director, Dan Egan.
- Any further AI tool procurement will follow Shapiro's 2023 executive order on AI in state government, which focuses on accuracy, transparency, fairness, security and employee empowerment, Egan told Ashley in an email.
- More than a dozen Pennsylvania agencies have tested AI for draft communications, summarizing public feedback and analyzing permitting data, with employees reporting time savings, Egan said.
We'll be back next week with part two in our series examining how states are using AI.
2. How Trump's planned AI exports program works
Companies are gearing up to join a Commerce Department program designed to supercharge U.S. AI exports.
Why it matters: The Trump administration wants U.S. tech in the hands of allies to strengthen its global competitive edge and counter China.
State of play: The administration has until Oct. 21 to set up a program to support "full-stack AI export packages," per an executive order.
- Companies across the AI ecosystem are expected to come together and offer proposals for the infrastructure, tools and models they want the government to designate as "priority" AI export packages.
- Companies that are accepted into the program would get federal loans, government investment and expedited licensing.
- But the timeline laid out in the executive order could be delayed by the shutdown.
The government would dedicate diplomatic resources to ensure the U.S. is involved in multilateral AI efforts and in partnerships with specific countries that exporters want to target, like the recently signed tech pact with the U.K.
- The Commerce Department, the State Department and the White House Office of Science and Technology Policy are tasked with standing up what will be called the "American AI Exports Program."
How it works: A so-called full stack is made up of various components, including:
- Hardware. Think Nvidia's or AMD's chips, Dell's servers and Intel's accelerators.
- Data center storage. Like IBM's offerings and cloud services from Amazon or Microsoft.
- Data pipelines and labeling systems. These are tools that companies like Meta and Scale AI are working on that move or annotate data such as images, text or video.
- AI models. For example, OpenAI's ChatGPT, Google's Gemini or Anthropic's Claude.
- AI apps for specific use cases. These could be focused on sectors ranging from health care and agriculture to transportation or finance.
3. Exclusive: AI startups find a D.C. advocate
HumanX and Humanrace Capital have tapped Rep. Jay Obernolte (R-Calif.) to launch the AI Coalition, a nonprofit meant to help smaller AI companies and startups get access to Washington, the group exclusively told Ashley.
Why it matters: Without a voice in D.C., startups fear they could be sidelined by the heavy compliance costs of inconsistent state laws or onerous federal rules.
- Obernolte, a leading House Republican on AI who has pushed for a moratorium on state-level AI laws, brings some heft to the project.
- HumanX hosts AI conferences, and Humanrace Capital is a VC fund that focuses on regulated sectors.
Driving the news: The group is calling itself "the first nonprofit specifically designed to provide early-stage AI companies, from pre-seed through Series B, with meaningful access to policymakers and influence over the regulations that will define the next decade of innovation," per a release shared with Axios.
- The group will be registered as both a 501(c)(3) and a 501(c)(6), covering both education and lobbying, said HumanX CEO Stefan Weitz, and membership will cost $1,750 to $3,500 monthly.
What they're saying: Obernolte, who chaired the House AI Task Force, said "it became abundantly clear that the voices of smaller companies and entrepreneurs" were missing at hearings on AI on Capitol Hill.
- "Their voices are critically important in making sure we make the right decisions on AI regulation," Obernolte told Axios in an interview.
- "If you're a big company, you have the resources to communicate with the government. If you're an entrepreneur, it's the furthest thing from your mind," he said.
- Obernolte wants the group to be a resource for lawmakers "as we develop the framework for federal regulation of artificial intelligence."
4. Scoop: Anthropic's national security plan
Anthropic is looking to expand how its AI models can be used by the government for national security purposes, a source familiar with the plans told Maria.
Why it matters: The Trump administration is focused on supercharging government adoption of AI, and Anthropic's moves aim to serve that push.
- But the government needs to balance the use of AI to protect against foreign threats with the handling of sensitive data and classified work.
Behind the scenes: For months, Anthropic has been thinking through how its policies should be adjusted as frontier AI capabilities and reliability across the industry have improved, the source said.
- That progress opened up the possibility of expanding national security use cases safely and boosting government adoption, per the source.
- A company spokesperson did not respond to a request for comment.
Anthropic is planning on expanding its policies in four ways:
1. Customers like the Defense Department would be able to use Anthropic's Claude Gov models to deploy and conduct cyber operations, with a human in the loop.
- Right now, Claude is used only for tasks like cyber threat analysis, the source said.
2. Claude would be enabled to make recommendations about foreign intelligence that's collected, beyond just analyzing the intelligence.
3. Customers would be able to generate content for military purposes, such as simulating war gaming scenarios or creating training materials for military and intelligence officers.
4. Anthropic would also offer sandbox environments for customers to explore potential future uses — a practice that was restricted before.
5. OpenAI says GPT-5 is its least biased model
OpenAI's GPT-5 model exhibits lower levels of political bias than any of its previous models, according to new research the company shared with Axios.
Why it matters: Critics of AI systems and politicians on both sides of the aisle have called for AI transparency and proof that models are not biased.
- An executive order from July aims to keep "woke" AI systems from being used by the government, but how companies could comply with that hasn't been clear.
Driving the news: Per new findings from OpenAI researchers, GPT-5 in both "instant" and "thinking" modes shows 30% less bias than previous models.
- "Our models stay near-objective on neutral or slightly slanted prompts, and exhibit moderate bias in response to challenging, emotionally charged prompts," the OpenAI paper says.
What they're saying: "Charged" prompts elicited the most biased results from the model, and there is room for improvement in model objectivity, OpenAI researchers told Ashley in an interview.
6. Exclusive: Bessent to keynote AI summit
Treasury Secretary Scott Bessent will keynote an AI summit on Oct. 21 hosted by the newly formed Prometheus Initiative, according to an invitation shared with Axios.
Why it matters: Government officials and industry players are eager to tout the societal benefits they're expecting from AI and quell concerns around job displacement, disinformation and more.
- The invitation calls the event the inaugural "AI Summit on American Prosperity."
Read the full story from earlier this week here.
Thanks to Mackenzie Weinger and David Nather for editing and Matt Piper for copy editing.