U.S. ramps up frontier AI testing as White House pivots toward safety

Illustration: Natalie Peeples/Axios
The government is deepening its oversight of cutting-edge AI, signing new agreements with Google DeepMind, Microsoft and xAI to test powerful models, according to a Commerce Department announcement.
Why it matters: As AI systems grow more powerful and potentially risky, officials want to understand their security implications before they hit the market, even in a Trump administration focused on accelerating AI innovation.
- It marks a sharp change from the White House's approach of prioritizing rapid innovation without guardrails in a bid to beat China.
Driving the news: The announcement comes a day after reports that the Trump administration is considering increased oversight of AI models via potential executive action on cybersecurity and pre-clearance of new models.
- In addition to Google, Microsoft and xAI, a spokesperson said that previously announced partnerships with Anthropic and OpenAI — first launched in 2024 — are "ongoing and reflect updated MOUs."
- Per the release, those deals "have been renegotiated" to reflect the Center for AI Standards and Innovation's directives from the Commerce secretary and President Trump's AI action plan.
How it works: CAISI will "conduct pre-deployment evaluations and targeted research to better assess frontier AI capabilities and advance the state of AI security," according to Commerce.
- The agreements allow for government evaluations of models before public release, as well as post-deployment assessments and related research.
What they're saying: "Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications," said CAISI director Chris Fall in a statement.
- "These expanded industry collaborations help us scale our work in the public interest at a critical moment."
- Fall was recently named director of CAISI; his predecessor, former Anthropic staffer Collin Burns, was reportedly pushed out after just four days on the job.
The big picture: Under the Biden administration, a 2023 executive order established the AI Safety Institute, which the Trump administration renamed the Center for AI Standards and Innovation (CAISI).
- As Axios previously reported, CAISI underwent significant changes at the beginning of Trump's term, and was expected to pivot from AI safety to AI acceleration.
- But the institute has continued conducting AI testing and evaluations, publishing an evaluation of China's DeepSeek and soliciting comment on secure deployment of AI agents.
