Scoop: Anthropic plans national security expansion

Illustration: Shoshana Gordon/Axios
Anthropic is looking to expand how its AI models can be used by the government for national security purposes, a source familiar with the plans told Axios.
Why it matters: The Trump administration is focused on supercharging government adoption of AI, and Anthropic's moves aim to serve that goal.
- But the government needs to balance the use of AI to protect against foreign threats with the handling of sensitive data and classified work.
Behind the scenes: For months, Anthropic has been thinking through how its policies should be adjusted as frontier AI capabilities and reliability across the industry have improved, the source said.
- That progress opened up the possibility for national security use cases to be expanded in a safe way and boost government adoption, per the source.
- A company spokesperson did not respond to a request for comment.
Anthropic is planning on expanding its policies in four ways:
1. Customers like the Defense Department would be able to use Anthropic's Claude Gov models to deploy and conduct cyber operations, with a human in the loop.
- Right now, Claude is used only for tasks like cyber threat analysis, the source said.
2. Claude would be enabled to make recommendations about collected foreign intelligence, beyond just analyzing it.
3. Customers would be able to generate content for military purposes, such as simulating war gaming scenarios or creating training materials for military and intelligence officers.
4. Anthropic would also offer sandbox environments for customers to explore potential future uses — a practice that was restricted before.
Catch up quick: Anthropic in June introduced a custom set of Claude Gov models exclusively for national security uses.
- The planned expansions build on those models.
