Anthropic: No "kill switch" for AI in classified settings
Anthropic says it has no way to control or shut down its AI models once they're deployed by the Pentagon, according to a new court filing.
Why it matters: The Pentagon designated Anthropic a supply chain risk, arguing the AI firm is inappropriately inserting itself into decisions about how its technology can be used in sensitive military operations.
What's inside: Anthropic argues in the filing to a federal appeals court in D.C. that it has no visibility, technical ability or any kind of "kill switch" for its technology once it's deployed.
- The company also says the Pentagon has the opportunity to test models before deployment.
Catch up quick: The company's usage policies bar the use of Claude for autonomous weapons or mass surveillance, red lines the Pentagon dismissed as red herrings and that led to the dispute.
- The D.C. court previously rejected Anthropic's request for a pause on the supply chain risk designation. A judge in California, overseeing a parallel case, granted Anthropic's request.
- The split decision means Anthropic can't participate in new Pentagon contracts, but can continue working with other government agencies while the litigation plays out.
Friction point: The Pentagon is arguing in court that Anthropic is a supply chain risk as the Trump administration moves to deploy its new Mythos model across the federal government.
- Now, agency heads are scrambling to figure out how they can use Mythos to protect their systems from cyberattacks, potentially complicating the administration's argument that the company poses a national security risk.
What's next: A hearing is scheduled for May 19.
