OpenAI isn't far behind Mythos' hacking powers

Illustration: Aïda Amer/Axios
New research suggests that OpenAI's GPT-5.5 model — aka Spud — is nearly as good at finding and exploiting software bugs as Anthropic's Mythos Preview.
Why it matters: The head start that cyber defenders were promised when Mythos was unveiled last month is disappearing faster than expected.
Driving the news: The U.K. AI Security Institute said Thursday that GPT-5.5 was able to complete a 32-step simulated corporate cyberattack in 2 out of 10 test runs. Mythos did the same in 3 out of 10 runs.
- Before Mythos, no AI model had ever successfully completed that test.
- GPT-5.5 also outperformed Mythos on a range of capture-the-flag tasks that test how well a model can find vulnerabilities, reverse-engineer incidents, and exploit web-based applications.
Between the lines: When Mythos was announced, Anthropic estimated it would be another six to 18 months before another AI company released a model with similar cyber capabilities.
- Now, that estimate is being tested, calling into question how much time government officials, critical infrastructure operators and cybersecurity companies have to beef up their defenses.
Yes, but: The powerful cyber capabilities of both Mythos and GPT-5.5 aren't available to everyone.
- Anthropic has given access to Mythos to only around 40 organizations, including the 12 members of its information-sharing partnership Project Glasswing.
- OpenAI has placed strict guardrails on the public versions of its models and is only giving access to models with fewer guardrails to vetted cyber defenders through its Trusted Access program.
What to watch: Last week, the Wall Street Journal reported that the White House had urged Anthropic not to broaden access to Mythos over national security concerns.
- Meanwhile, OpenAI has been helping federal agencies, state and local governments, and international allies sign up for its program that gives cyber defenders access to cyber-permissible versions of GPT-5.4 and 5.5.
Go deeper: Trump administration considering safety review for new AI models
