Axios Future of Cybersecurity Thought Bubble

December 05, 2025
😎 TGIF, everyone! I'm popping in with some thoughts from our AI+ Summit in San Francisco yesterday.
- 📺 Missed out? Tune in to some of our conversations with leading AI policymakers and executives here.
📬 Have thoughts, feedback or scoops to share? [email protected].
Today's newsletter is 493 words, a 2-minute read.
1 big thing: Defenders buckle down for anticipated AI attacks
Even the CEO of Palo Alto Networks, a leading cyber defense company, expects attackers to outpace defenders in their deployment of AI tools.
Why it matters: Sure, that's partly a sales pitch — buy more Palo Alto Networks to prepare for the onslaught! — but it's also a remarkable reality check from the CEO of a company that's providing the solutions to fend off these attacks.
- And it's a warning that's being echoed beyond the walls of major cybersecurity firms.
What he's saying: "We can build all the tools you want; the customers have to embrace that strategy quickly," Palo Alto Networks CEO Nikesh Arora told me onstage at the AI+ Summit in San Francisco yesterday.
- "If the customers don't let us in and work with them to put all the plumbing in place, which can take a while, we're not going to be able to respond as quickly."
The big picture: When ChatGPT and its competitors first hit consumers in late 2022, security was top of mind. There was an active discourse about securing AI tools by design to fend off new threats.
- But as the big AI players plowed ahead, the focus on security waned. "AI security is not talked about," Arora said.
- "If you spend more than 2% of time thinking about security, you're slower than your competitor," Arora added. "So they're not."
Between the lines: Those gaps are creating more entry points for hackers.
- U.S. AI developers are increasingly relying on open-source models built in China and elsewhere to power their tools.
- Hackers are eyeing ways to overrun AI agents and use them to exfiltrate data.
Meanwhile, attackers are getting savvier. Arora said his company has seen more deepfake fraud targeting consumers, as well as attackers using AI to speed up data-stealing attacks.
- "The AI startup world, the AI LLM world, the vibe coding world, it's the Wild West from a security perspective," Arora said.
Reality check: Part of the problem is that defenders need to vet what is powering these AI tools.
- Inside Palo Alto Networks, employees aren't yet using coding assistants for that exact reason, Arora said.
- "My team is still not using any of the vibe coding agents out there, because we don't know what the models are behind them," he said. "I can't put my source code out in the public domain in a model because my source code is securing 82,000 companies."
What to watch: How quickly companies take up AI-powered tools to detect and remediate security vulnerabilities on their networks.
☀️ See y'all Tuesday!
Thanks to Dave Lawler for editing and Khalid Adad for copy editing this newsletter.
If you like Axios Future of Cybersecurity, spread the word.
Sign up for Axios Future of Cybersecurity Thought Bubble


