Axios AI+ Government

October 03, 2025
It's Friday. That means it's time for AI+ Government, our weekly look at how lawmakers encourage, regulate and use AI. Thanks for reading!
Today's newsletter is 1,630 words, a 6-minute read.
1 big thing: How a wonky Commerce rule could disrupt AI companies
An export control rule from the Commerce Department is set to create a major new layer of work for AI and tech companies with global supply chains and partners.
Why it matters: The move is a clear win for national security hawks who want U.S. tech firms to retreat from business partnerships with companies in China and other countries deemed to be national security risks.
- The compliance challenges for businesses could be significant, some observers warn.
Driving the news: Commerce's Bureau of Industry and Security issued its interim final "50% rule" earlier this week, extending licensing restrictions to subsidiaries of entities on U.S. export control and sanctions lists.
- As the name implies, the rule applies to companies more than half owned by one or more restricted entities.
- The Commerce Department says that this closes a loophole that allowed subsidiary businesses of listed companies to evade export restrictions.
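The ownership test described above can be sketched in a few lines of code. This is an illustrative simplification, not legal guidance: the function names, data shapes, and the strict greater-than-50% threshold are assumptions drawn from the article's description ("more than half owned"), and the real rule also reaches indirect ownership chains, which this sketch ignores.

```python
def restricted_ownership(stakes):
    """Sum the direct ownership stakes held by listed entities.

    `stakes` maps an owner's name to (fraction_owned, is_listed).
    Illustrative only: the actual rule also covers indirect
    ownership through chains of intermediaries.
    """
    return sum(frac for frac, listed in stakes.values() if listed)

def needs_license(stakes, threshold=0.50):
    # Per the article, the rule covers companies more than half
    # owned, in aggregate, by one or more restricted entities.
    return restricted_ownership(stakes) > threshold

# Example: two listed owners at 30% and 25% aggregate to 55%,
# tripping the threshold even though neither alone holds half.
stakes = {
    "Listed Co A": (0.30, True),
    "Listed Co B": (0.25, True),
    "Unrelated fund": (0.45, False),
}
print(needs_license(stakes))  # True
```

The aggregation across multiple listed owners is the key compliance wrinkle: a company can be covered even when no single restricted entity holds a majority stake.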
What they're saying: "By flipping this kind of light switch on, you're automatically prohibiting some of the most egregious violators of U.S. export control laws over the last several years, companies that operate all over the world, not just in China," Kit Conklin, a former senior adviser to the House Select Committee on China and now a senior vice president at Exiger, told Axios.
- Industry has known this move was coming, he said, and it provides necessary clarity for companies trying to compete globally.
The other side: Some warn that the 50% rule's compliance demands are too heavy for companies racing to accelerate on AI.
- "The irony is that this is happening at the very moment the administration is calling for an 'all-in' push on artificial intelligence and advanced technology," said Joseph Hoefer, principal and AI policy lead at Monument Advocacy, which represents tech firms.
- "If American firms are tied up chasing ownership records and watching license applications get returned 'without action,' the U.S. could end up stifling its own innovation while adversaries with looser regimes move forward."
Whether the rule is effective in protecting national security will depend on two factors, said Thea Kendler, former Commerce assistant secretary for export administration.
- One is whether companies, especially medium and small ones, can handle the compliance demands of this new export control rule.
- "You now have a whole new bunch of due diligence that you have to do regardless of whether you're in a sensitive industry, regardless of whether you're dealing with China," Kendler said.
- Two is how well the government can enforce the rule, and what it will do when it finds violations.
Zoom in: One AI company, Anthropic, anticipated the bureau's rule a month ago when it decided to cut ties with companies that are more than 50% owned by a parent company in China.
The bottom line: The new rule could bolster the Trump administration's export control policy, but at the cost of imposing heavy compliance burdens that risk slowing down AI ambitions.
2. What the government shutdown means for tech
The government shutdown is set to stall critical tech and science work across federal agencies.
Why it matters: Many tech agencies and offices have already been running on fumes or have been gutted by DOGE cuts and consistently low funding from Congress. This is the latest blow.
Here's what we're watching:
The White House's AI plans have some looming deadlines.
- Comments to the Office of Science and Technology Policy on federal regulations that hold back the development and deployment of AI are due later this month.
- The Commerce Department — in consultation with the State Department and OSTP — needs to establish and implement a program to promote AI export packages by Oct. 21, per one of President Trump's AI executive orders.
- OSTP has not come out with a contingency plan, but a spokesperson said the office is "continuing to execute" on the president's AI action plan.
The Commerce Department's contingency plan states that most research work will stop at the National Institute of Standards and Technology.
- "Certain activities of the NIST Center for AI Standards and Innovation that are funded through means other than annual appropriations may continue," the plan says.
- Offices focused on semiconductors will continue operating during the lapse using CHIPS and Science Act funding, which is aimed at boosting domestic chip manufacturing. But many employees from the AI standards center and the CHIPS office were let go earlier this year.
- The Broadband Equity Access and Deployment Program, which is meant to expand internet service to rural and underserved areas, will also continue.
Yes, but: While on paper BEAD and CHIPS Act operations are continuing, both Biden-era programs have been attacked by President Trump and were on shaky ground even before the shutdown.
The Federal Trade Commission is furloughing everyone except its remaining Republican commissioners and a handful of essential staff.
- Hearing and filing deadlines for major antitrust cases could be delayed.
The National Science Foundation is furloughing most of its staff and will stop issuing any new grants.
The Federal Communications Commission is also sending most of its employees home and ceasing work around consumer complaints and spectrum management.
The Small Business Administration's seed fund and its programs to help small businesses develop and commercialize technology have also lapsed.
The Bureau of Labor Statistics' suspension of operations means economic data, including a tech jobs report, will not be released.
- "Recent changes in the workforce have raised red flags about AI's impact on job opportunities, and functional national statistical agencies are critical to monitoring that impact and shaping our nation's response," Brad Carson, the president of nonprofit Americans for Responsible Innovation, wrote in a letter to administration officials shared first with Axios.
What's next: Many employees who keep the government's programs running are now furloughed, pausing major research efforts and making key tech deadlines tough to meet.
3. Bill spotlight: Independent AI safety panels
The next AI policy idea that could gain traction in the U.S. would give companies some legal immunity from challenges over possible harms if they prove they're adhering to safety standards.
Why it matters: As it becomes increasingly clear that the federal government isn't going to meaningfully regulate AI, this is one model that could pick up steam in states across the country.
Driving the news: AI safety nonprofit Fathom is looking to get more state lawmakers to introduce legislation that would set up a certification regime of voluntary third-party testing panels for AI models and applications.
- California's SB 813 bill, which Fathom backed, would have done that but didn't advance this session.
- Now Fathom is looking to roll out the effort again in the state and others next year, co-founder Andrew Freedman told Axios.
Freedman said a refined version of SB 813 will be introduced in 2026, with new input from a variety of groups and changes including the type of legal protection companies would get from participating.
- He's also expecting two similar bills to roll out in different states soon, with additional states to follow (he declined to name the states).
How it works: Tech and AI companies would opt into being certified by an independent verification organization to ensure they're meeting a heightened standard of care in a risk area, such as children's safety, in exchange for protection from certain levels of legal risk.
- "It doesn't mean there's no risk in the system anymore ... it creates a system of less risk than the industry standard," Freedman said.
- Dean Ball, former AI adviser at the White House Office of Science and Technology Policy and now a policy fellow at Fathom, helped come up with the idea.
What they're saying: "We want to distinguish that this is not going to be a patchwork solution, that it's something that could be adopted and become national, even if it doesn't become federal," Freedman said.
The bottom line: Freedman said certifying bodies for AI safety will emerge no matter what happens in statehouses — but they'll work much better if there's "transparency and accountability" from lawmakers setting specific targets.
4. Standards institute sounds alarm over DeepSeek
A new government report warns that China's DeepSeek models pose risks to national security, even as they trail far behind American competitors on performance and cost.
The big picture: The report could give China hawks in Congress sturdier standing in their efforts to ban DeepSeek on government devices.
- "I am hopeful that this report will encourage more bipartisan support for the No DeepSeek on Government Devices Act and any future legislation to ban harmful AI programs that could be used for malign purposes by our foreign adversaries," Rep. Darin LaHood (R-Ill.) said.
Driving the news: The National Institute of Standards and Technology's Center for AI Standards and Innovation report released on Tuesday marks the first time a government agency has issued a comprehensive assessment of DeepSeek against U.S. frontier AI models.
What's inside: The report presents the center's evaluations of DeepSeek models against three OpenAI models and one from Anthropic. According to the evaluation:
- OpenAI's GPT-5 mini costs 35% less on average to achieve the same results as the best DeepSeek model.
- DeepSeek's most secure model was, on average, 12 times more likely than U.S. frontier models to "follow malicious instructions designed to derail them from user tasks."
- The center also said that DeepSeek models echo Chinese Communist Party narratives more frequently than U.S. models, with "4 times as many inaccurate and misleading" responses on a dataset of "politically sensitive questions."
What they're saying: "CAISI's evaluation confirms what people have long warned: PRC models are easier to subvert, more likely to push CCP narratives, and are spreading fast. It's like Huawei on steroids," said Beacon Global Strategies' Divyansh Kaushik.
Thanks to Mackenzie Weinger and David Nather for editing and Matt Piper for copy editing.