Anthropic leaked 500,000 lines of its own source code

Illustration: Sarah Grillo/Axios
Source code powering Anthropic's Claude Code leaked for the second time in just over a year, publicly exposing the AI coding tool's full architecture, unreleased features and internal model performance data.
Why it matters: The leak hands competitors a detailed unreleased feature roadmap and deepens questions about operational security at a company that sells itself as the safety-first AI lab.
State of play: A file used internally for debugging was accidentally bundled into a routine update of Claude Code and pushed to the public registry developers use to download and update software packages.
- The file, which was quickly discovered by Chaofan Shou, pointed to a zip archive on Anthropic's own cloud storage containing the full source code, with nearly 2,000 files and 500,000 lines of code.
- Within hours, the codebase was mirrored and dissected across GitHub, quickly amassing thousands of stars.
What they're saying: "Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed," an Anthropic spokesperson told Axios.
- "This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again."
Zoom in: The leaked code contained dozens of feature flags for capabilities that appear fully built but haven't shipped, according to an Anthropic spokesperson, including:
- The ability for Claude to review its latest session, study it for future improvements and carry learnings across conversations.
- A "persistent assistant" running in background mode that lets Claude Code keep working even when a user is idle.
- Remote capabilities allowing users to control Claude from a phone or another browser, which have already rolled out for Claude Code.
Between the lines: Outside developers have already reverse-engineered Claude Code, prompting a takedown notice from Anthropic, according to TechCrunch.
- What's new is the roadmap: a clear picture of how Anthropic is building toward longer autonomous tasks, deeper memory and multi-agent collaboration.
- Those kinds of updates could be a boon for Anthropic's enterprise push, which is the core driver of its revenue strategy, as the AI lab prepares to go public.
Thought bubble: How AI companies lock down and secure their own systems is now just as important as how other organizations fend off hackers using these AI tools in their attacks, writes Sam Sabin, author of the weekly Future of Cybersecurity newsletter.
The bottom line: The leak won't sink Anthropic, but it gives every competitor a free engineering education on how to build a production-grade AI coding agent and what tools to focus on.
- And the company that markets itself as the safety-first AI lab just shipped its own source code to the public, making it Anthropic's second major security blunder in a week.
