The race to catch Claude

Illustration: Sarah Grillo/Axios
Supremacy can be fleeting in the highly competitive AI race. But two months into 2026, Anthropic's Claude is upending U.S. national security policy, roiling financial markets and redefining how startups are built.
Why it matters: The company is in the middle of the most important fight of the era — how much power to give AI in the face of threats, real and virtual.
- Anthropic said this week it would soften the central commitment of its flagship safety framework, acknowledging that unilateral safety pledges won't survive a world where rivals have no such constraints.
- For a company that has long positioned itself as the AI industry's conscience, it was a remarkable reversal — on the same day that the Pentagon threatened to effectively kick Claude out of government in a fight over its appropriate military uses.
The big picture: Two years ago, Anthropic was virtually unknown outside of San Francisco. Today, the startup is valued at $380 billion — raising $30 billion this month from some of the biggest financial and tech investors in America.
- Practitioners consistently rank Claude above rivals for complex reasoning, nuanced writing and reliability.
State of play: Rivals are scrambling to catch up. OpenAI is expected, as soon as Thursday, to release ChatGPT 5.3, or "Garlic" — the product of CEO Sam Altman's "code red" directive in December to speed development in response to pressure from Google.
- But the biggest wild card is China, where the upcoming release of DeepSeek's V4 model threatens to reignite a U.S. market panic that wiped $1 trillion from tech stocks last January.
Zoom out: For now, Claude has established its dominance across three engines of American power.
1. In Washington, Anthropic is locked in a high-stakes dispute with the Pentagon over whether Claude can be used for mass surveillance of Americans (a use the Pentagon says is already illegal), or for lethal weapon systems that don't require human involvement.
- The tension reflects a blunt calculus: The Pentagon views Claude as the best-performing model. Replacing it would be costly and disruptive. "The problem for these guys is they are that good," a defense official told Axios.
- Defense Secretary Pete Hegseth has given Anthropic CEO Dario Amodei until Friday to loosen Claude's military guardrails, or face a potential "supply chain risk" designation.
- If that happens, anyone doing business with the Pentagon would be required to certify they don't use Claude — a potentially massive disruption, given it's already used by eight of the 10 largest U.S. companies.
2. On Wall Street, new releases by Anthropic have triggered five separate stock market gyrations in four weeks — a phenomenon traders have dubbed the "SaaSpocalypse."
- Feb. 3: Cowork legal plugins wipe out $285 billion in market value. Thomson Reuters plunges nearly 16% — its worst single day on record. LegalZoom craters 20%. FactSet drops more than 10%.
- Feb. 6: Claude Opus 4.6 launches and financial data stocks bleed again. The Nasdaq posts its worst two-day tumble since April.
- Feb. 20: Claude Code Security hits cybersecurity. CrowdStrike down 8%. Cloudflare down 8%. JFrog down 25%.
- Feb. 23: A Claude blog post about automating legacy bank code sends IBM to its worst single day since October 2000 — $31 billion gone by the closing bell.
- Feb. 24: Anthropic launches job-specific tools. Victims of the first wave — FactSet, DocuSign and Thomson Reuters — all rally after revealing new partnerships with Claude.
3. In Silicon Valley, Claude Code has become an obsession among venture capitalists and engineers who see it as the foundation of a new era of AI-native and agentic startups.
- Engineers describe Claude Code as the first tool that genuinely compresses development timelines from weeks to hours, allowing small teams to ship what once required entire departments.
- The surge has intensified pressure on OpenAI, which launched a competing version of its Codex app earlier this month.
The bottom line: Anthropic was built on the philosophy that the safest AI would also be the best AI. Holding that line is shaping up to be Claude's biggest test yet.
