AI's mass surveillance problem

The Pentagon’s standoff with Anthropic highlights a mass surveillance reality: There are few laws limiting what the government can do with artificial intelligence.
Why it matters: AI's evolving technology enables surveillance scenarios that may be widely unpopular but fully legal.
State of play: One of Anthropic's stated red lines was barring its AI system from mass domestic surveillance.
- "AI-driven mass surveillance presents serious, novel risks to our fundamental liberties," Anthropic CEO Dario Amodei wrote.
- "To the extent that such surveillance is currently legal, this is only because the law has not yet caught up with the rapidly growing capabilities of AI," he also wrote.
- The Pentagon, meanwhile, wanted the ability to use AI for essentially any purpose allowed by law.
Between the lines: Letting the Pentagon deploy AI for anything that is legal would give it sweeping discretion, since Congress has yet to establish clear guardrails.
- That's compounded by the lack of federal privacy protections or limits on what the government can do with commercially available data.
- "For example, under current law, the government can purchase detailed records of Americans' movements, web browsing, and associations from public sources without obtaining a warrant, a practice the Intelligence Community has acknowledged raises privacy concerns and that has generated bipartisan opposition in Congress," Amodei's statement said.
- OpenAI's deal with the Pentagon doesn't explicitly prohibit this either, Axios reported on Sunday.
AI advances have supercharged surveillance. The tools allow anyone with access to them to combine and analyze massive amounts of data in novel ways, as Anthropic highlighted.
- "Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person's life — automatically and at massive scale," Amodei wrote.
What they're saying: "We're at a point right now where neither having the Pentagon write the rules, whatever those might be, nor having a company, even one presumably as well intentioned as Anthropic, making decisions about this is a particularly good place to be as a democracy," said Steve Feldstein, senior fellow at the Carnegie Endowment for International Peace.
- "The idea of surveillance that overreaches legal mandates has been an ongoing concern, but with AI, it gets supercharged," Feldstein said. "It happens at scale, and I think updated rules are needed."
"It is completely reasonable for the Pentagon to want full control of its capabilities consistent with the law," said Vivek Chilukuri, senior fellow at the Center for a New American Security.
- "But the lack of clear and current rules for advanced AI systems, and a meaningful public debate about what those rules ought to be, can breed the distrust between government and industry that helped propel this recent, needlessly destructive, dispute."
The other side: A former Pentagon official who worked on AI told Axios the department has sufficient policy governing artificial intelligence and autonomous weapons to avoid a regulatory vacuum.
- In his view, the dispute stems from Anthropic's discomfort with how the Pentagon might use Claude, regardless of legal precedent.
- "This is about personalities and politics much more than real policy disagreements, especially since Anthropic is willing to work with the Pentagon even on making LLMs capable of powering autonomous weapon systems," said Michael Horowitz, the former Pentagon official and now professor at the University of Pennsylvania.
The bottom line: The Pentagon-Anthropic clash exposes how quickly AI capabilities are advancing beyond the legal framework meant to contain them.