Axios AI+

March 02, 2026
Claude is down this morning, according to Downdetector. The company acknowledged the outage in a status update, and it comes after a surge in popularity for the chatbot. More on that below.
Today's edition is 1,150 words, a 4.5-minute read.
1 big thing: AI's mass surveillance problem
The Pentagon's standoff with Anthropic highlights a mass surveillance reality: Few laws limit what the government can do with artificial intelligence.
Why it matters: AI's evolving technology enables scenarios that may be widely unpopular, but fully legal.
State of play: One of Anthropic's stated red lines was barring its AI system from mass domestic surveillance.
- "AI-driven mass surveillance presents serious, novel risks to our fundamental liberties," Anthropic CEO Dario Amodei wrote.
- "To the extent that such surveillance is currently legal, this is only because the law has not yet caught up with the rapidly growing capabilities of AI," he also wrote.
- The Pentagon, meanwhile, wanted the ability to use AI for essentially any purpose allowed by law.
Between the lines: Letting the Pentagon deploy AI for anything that is legal would give it sweeping discretion given that Congress has yet to establish clear guardrails.
- That's compounded by the lack of federal privacy protections or limits on what the government can do with commercially available data.
- "For example, under current law, the government can purchase detailed records of Americans' movements, web browsing, and associations from public sources without obtaining a warrant, a practice the Intelligence Community has acknowledged raises privacy concerns and that has generated bipartisan opposition in Congress," Amodei's statement said.
- OpenAI's deal with the Pentagon doesn't explicitly prohibit this either, Axios reported yesterday.
AI advances simplify surveillance. The tools allow anyone with access to them to combine and analyze massive amounts of data in novel ways, as Anthropic highlighted.
- "Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person's life — automatically and at massive scale," Amodei wrote.
What they're saying: "We're at a point right now where neither having the Pentagon write the rules, whatever those might be, nor having a company, even one presumably as well intentioned as Anthropic, making decisions about this is a particularly good place to be as a democracy," said Steve Feldstein, senior fellow at the Carnegie Endowment for International Peace.
- "The idea of surveillance that overreaches legal mandates has been an ongoing concern, but with AI, it gets supercharged," Feldstein said. "It happens at scale, and I think updated rules are needed."
"It is completely reasonable for the Pentagon to want full control of its capabilities consistent with the law," said Vivek Chilukuri, senior fellow at the Center for a New American Security.
- "But the lack of clear and current rules for advanced AI systems, and a meaningful public debate about what those rules ought to be, can breed the distrust between government and industry that helped propel this recent, needlessly destructive, dispute."
The other side: A former Pentagon official who worked on AI told Axios the department has sufficient policy governing artificial intelligence and autonomous weapons to avoid a regulatory vacuum.
- In his view, the dispute stems from Anthropic's discomfort with how the Pentagon might use Claude, regardless of legal precedent.
- "This is about personalities and politics much more than real policy disagreements, especially since Anthropic is willing to work with the Pentagon even on making LLMs capable of powering autonomous weapon systems," said Michael Horowitz, the former Pentagon official and now professor at the University of Pennsylvania.
The bottom line: The Pentagon-Anthropic clash exposes how quickly AI capabilities are advancing beyond the legal framework meant to contain them.
2. Claude dethrones ChatGPT as top U.S. app
Anthropic's Claude hit No. 1 in U.S. app downloads Saturday, overtaking ChatGPT, after the Pentagon blacklisted the company for refusing to loosen safeguards for military use of its AI model.
Why it matters: The long-term business impact for Anthropic remains unclear. But in the short term, the clash has fueled interest in Claude, as some social media users call for dumping ChatGPT over OpenAI's deal with the Pentagon.
Catch up quick: Anthropic lost its Pentagon contract Friday over a dispute about military use of Claude, shortly after President Trump blasted it as a "Radical Left AI company."
- Anthropic drew red lines against using its AI for mass surveillance and autonomous lethal weapons.
- Hours later, OpenAI, maker of ChatGPT, announced its own Pentagon deal after the Defense Department agreed to red lines similar to Anthropic's.
- Anthropic vowed to "challenge any supply chain risk designation in court," and said no "intimidation or punishment from the Department of War" would force it to cave.
Zoom in: The Instagram account "quitGPT" gained about 10,000 followers following the news, according to its operator.
- A Reddit post about OpenAI winning the Pentagon contract, titled "Cancel and Delete ChatGPT!!!", racked up 38,000 upvotes.
- A viral video appeared to show chalk art outside Anthropic's offices in San Francisco reading "you give us courage."
- People on social media have pointed to OpenAI president Greg Brockman's previous $25 million donation to a pro-Trump super PAC as part of their rationale for moving away from ChatGPT.
By the numbers: Data from OpenRouter measuring model usage over the last month shows twelve different models now outpace OpenAI.
- Claude's Sonnet 4.5 ranked fifth for February, according to the data, and the top model was from China-based MiniMax.
Reality check: ChatGPT is still right behind Claude on the app store charts and has a first-mover advantage in AI.
What's next: While the winners and losers of the AI race are constantly shifting, Claude has been on fire, particularly with enterprises, after several product launches spurred swift business adoption.
- Now, a blowup with its own government is spreading that popularity into the broader zeitgeist.
3. Amazon's OpenAI deal aims to leapfrog Microsoft
A deal with OpenAI announced Friday could put Amazon first to market with a new type of AI service for developers, but the two companies will have to tread carefully to avoid running afoul of OpenAI's deal with Microsoft.
Why it matters: If successful, Amazon Web Services could land in a more enviable position at the leading edge of generative AI rather than being forced to compete mainly by delivering models more cheaply than rival clouds.
Driving the news: The multipronged deal calls for Amazon to invest up to $50 billion in OpenAI, with Amazon getting access to a variety of OpenAI services.
4. Training data
- Amazon said power to its AWS data center in the United Arab Emirates was halted Sunday after it was struck by objects, creating sparks and igniting a fire. (Reuters)
- Nvidia will invest $4 billion in two data center optics firms, continuing its expansive investments into several different parts of the tech stack. (Bloomberg)
5. + This
Developer Lyra Rebane has managed to create a browser-based x86 computer emulator written entirely using the web technology known as Cascading Style Sheets (CSS), with no JavaScript needed.
Thanks to Megan Morrone for editing this newsletter and Matt Piper for copy editing.