OpenAI-Pentagon deal faces same safety concerns that plagued Anthropic talks

OpenAI CEO Sam Altman (center) and Anthropic CEO Dario Amodei (right) pose with Indian PM Narendra Modi. Photo: Ludovic Marin / AFP via Getty Images
OpenAI's new deal with the Pentagon does not explicitly prohibit the collection of Americans' publicly available information — a sticking point that rival Anthropic says is crucial for ensuring domestic mass surveillance doesn't take place.
Why it matters: OpenAI CEO Sam Altman, Anthropic CEO Dario Amodei, and the Pentagon's lead AI negotiator Emil Michael all say they care about civil liberties, but disagree on whether the law today offers enough protections for AI use.
- Altman was asked thousands of questions about his contract with the Pentagon during an "ask me anything" on X Saturday night, including whether he was worried there would be a dispute later on with the Pentagon over what's legal or not.
- "Yes, I am. If we have to take on that fight we will, but it clearly exposes us to some risk."
State of play: Friday night, the Pentagon said it would blacklist Anthropic. As of Saturday night, no formal language designating Anthropic a "supply chain risk" has been sent, according to a source familiar.
- Altman pushed for deescalation. "To say it very clearly: I think this is a very bad decision from the DoW and I hope they reverse it. If we take heat for strongly criticizing it, so be it."
- The dispute is at the heart of an extraordinary blowup over the last week that saw the Pentagon first praise Anthropic's Claude as best-in-class, and then declare it the kind of risk usually reserved for Chinese tech giants.
- It's become an existential moment for the American AI industry, with a former top Trump adviser likening it to "attempted corporate murder."
Zoom out: Anthropic contends the law today does not contemplate AI and, for that reason, asked the Pentagon to explicitly include in their contract that they cannot collect Americans' public information in bulk. The Pentagon refused.
- That would include geolocation, web browsing data or personal financial information purchased from data brokers.
- While all that data is legal to collect, Anthropic feared that artificial intelligence could supercharge that collection and the subsequent surveillance of Americans.
The language in OpenAI's contract is specifically about the "unconstrained" collection of Americans' private information — not public information that critics say can also lead to technically legal mass surveillance.
- There is also a provision regarding autonomous weapons, which some worry the Pentagon can change at will.
- "We and the DoW got comfortable with the contractual language, but I can understand other people would have a different opinion here. I think Anthropic may have wanted more operational control than we did," Altman said on X.
Between the lines: The Pentagon wants to use AI models for "all lawful purposes" without caveats.
- The Pentagon "does not engage in any unlawful domestic surveillance with or without an AI system and always strictly complies with laws, regulations, the Constitution's protections for Americans' civil liberties," the Pentagon's Michael said on X Saturday.
- "The DoW has always believed in safety and human oversight of all its weapons and defense systems and has strict comprehensive policies on that," Michael added.
- OpenAI agreed to the Pentagon's "all lawful purposes" standard and said that in addition to "strong existing protections in U.S. law," it retains full discretion over its own safety stack, which the company says has strong contractual protections.
What they're saying: "Publicly available information can only be used by the military for defense and intelligence purposes if it's tied to authorized national-security missions," an OpenAI spokesperson said.
- "The military cannot use it for ordinary domestic law enforcement or to target Americans without a lawful defense or intelligence purpose, and strict oversight and privacy limits apply," the spokesperson added.
The intrigue: Before the blacklisting, administration officials made the dispute personal, including Trump himself, who said Anthropic is full of "radical leftists," and Michael, who said Amodei is a "liar" with a "God complex."
- In the Pentagon's view, Anthropic's "virtue signaling" is what made the fight personal, a senior Pentagon official said.
- Altman and his company, meanwhile, have managed to stay out of the administration's crosshairs. (His OpenAI co-founder Greg Brockman is reported to be one of the top individual donors to pro-Trump super PACs.)
The bottom line: Personal insults and allegations of virtue signaling aside, the breakup with Anthropic came down to the Pentagon's views of how it should be allowed to use AI for national security.
- "This was never personal for us. At the end of the day, this was the Department wanting to use Anthropic for all lawful purposes. That's what it's been about since day one," the senior Pentagon official said.
Editor's note: This story has been updated with a statement from OpenAI.
