Exclusive: Palo Alto Networks says new AI models found 7x more vulnerabilities

Illustration: Brendan Lynch/Axios
Palo Alto Networks says it found 75 vulnerabilities in its products — more than seven times the number it typically finds in a month — after it began using advanced AI cybersecurity models from Anthropic and OpenAI.
Why it matters: The cybersecurity giant is among the first companies with access to Anthropic's Mythos Preview and OpenAI's GPT-5.5-Cyber, offering an early glimpse at what parts of the industry have started calling a coming "vulnpocalypse."
Driving the news: Palo Alto Networks now estimates organizations have just three to five months before attackers broadly gain access to the capabilities of frontier AI cyber models.
- Palo Alto Networks is among a small group of organizations with access to both Mythos and OpenAI's cyber-focused models.
- Over the past month, the company scanned more than 130 products for software flaws, uncovering 75 legitimate vulnerabilities that have since been patched. None of those vulnerabilities were actively being exploited in the wild.
- The company usually finds and discloses 5-10 vulnerabilities per month.
Zoom in: Many of the vulnerabilities stood out because the models were able to identify ways to chain multiple flaws together into a working exploit path — which earlier AI systems struggled to do, Chief Product Officer Lee Klarich told Axios.
- The models appeared especially adept at understanding the "logic" of how applications worked and then identifying how attackers might exploit combinations of weaknesses, Klarich said.
- In several cases, Palo Alto Networks said, the individual flaws might not have warranted disclosure on their own but became high-severity vulnerabilities when combined.
- During internal testing, Palo Alto Networks found the models generated working exploits more than 70% of the time. "These models are much better at writing working exploits than what we had seen before," Klarich said.
Reality check: Finding the vulnerabilities still required extensive human expertise and customization, Klarich said.
- Palo Alto Networks experienced an average false-positive rate of roughly 30%, though that varied widely depending on how researchers trained the models and what contextual information they provided.
- The company spent significant time building what Klarich described as an "AI-scanning harness" to feed the models threat intelligence, context and operational guardrails.
- "These models aren't magic," Klarich said. "We spent a tremendous amount of time building an AI-scanning harness and that harness is how we connect the model to whatever we're going to scan."
The big picture: Companies and governments have spent the last month scrambling to assess how to defend against a future where attackers have access to the vulnerability-hunting capabilities of models like Mythos and GPT-5.5-Cyber.
- Klarich said Anthropic's and OpenAI's models are similarly powerful, but tend to identify different types of vulnerabilities.
- That means organizations should use multiple models in parallel to uncover the widest range of flaws, he said.
Between the lines: Palo Alto Networks is urging organizations to take a four-pronged approach to defending against AI-assisted cyberattacks.
- Build the ability to find and patch vulnerabilities before attackers can exploit them.
- Reduce internet-facing exposure so only essential systems remain publicly accessible.
- Deploy automated detection and prevention tools capable of blocking attacks in real time.
- Integrate AI and automation into security operations centers so defenders can respond at machine speed.
What to watch: The White House is actively debating proposals for testing and restricting advanced AI models with powerful cybersecurity capabilities before wider deployment.
