Congress stalls on military AI as Google and the Pentagon strike deal

Illustration: Shoshana Gordon/Axios
Congress remains far from passing military AI guardrails as Google and the Pentagon sign a new contract that is more permissive than those with rival companies like OpenAI.
Why it matters: AI companies say they have red lines around military misuse. But without new laws on the books, contracts with the Pentagon could remain susceptible to loopholes and workarounds.
Driving the news: The Pentagon this week reached an agreement with Google to use its model for "all lawful use," a source familiar with the deal confirmed.
- Google's Gemini can now be used in classified settings, and the contract is reportedly more permissive than OpenAI's.
- OpenAI says it retains "full discretion" over its safety mechanisms while Google agreed to adjust its safety settings at the government's request, according to The Information, which first reported the deal.
- DeepMind research scientist Alex Turner criticized the agreement, posting that Google "can't veto usage" and is relying on "aspirational language with no legal restrictions."
The big picture: The Pentagon-Google deal comes as AI labs develop more powerful models and the technology's use comes under scrutiny amid the Iran war.
- Outside groups are hopeful that recent developments will supercharge efforts on the Hill to include military AI safeguards in the annual defense policy bill.
Friction point: Google and OpenAI both say they have the same red lines: no AI for autonomous weapons or mass surveillance.
- "We remain committed to the private and public sector consensus that AI should not be used for domestic mass surveillance or autonomous weaponry without appropriate human oversight," a Google spokesperson said.
- Critics argue those commitments rely on contract language that is not legally binding.
Zoom in: Anthropic is the only major AI lab that has not struck a deal with the Pentagon.
- The Defense Department continues to use Anthropic's models while litigation plays out over its supply chain risk designation, and efforts are underway to give the broader government access.
On Capitol Hill, momentum has been slow, but advocates say that attention is now picking up.
- Hamza Chaudhry of Future of Life Institute, a nonprofit advocacy group, is pushing for greater transparency around AI company dealings with the Pentagon, along with modernizing the AI testing and verification process before systems are deployed.
- Congressional attention hasn't matched "the deepening linkage between an unprecedentedly powerful technology being built wholly in the private sector and the most powerful warfighter in the world," said Chaudhry, adding there has been an uptick in congressional outreach in recent weeks.
Tech advocacy group Americans for Responsible Innovation is calling on Congress to codify a "five-second rule" to ensure meaningful human control over AI weapons.
- "Without at least a few seconds to process, the human operator can collapse into a rubber-stamping function," ARI says in a new white paper.
- The group also calls for thorough congressional verification of AI systems before contracts are awarded.
What we're watching: Committees in both chambers are on track to mark up the National Defense Authorization Act this summer.
