Exclusive: AI will supercharge cyber weapons within two years, experts warn

Illustration: Shoshana Gordon/Axios
The world has about two years to prepare for AI-powered cyber weapons capable of evading current security tools, a NATO-backed security startup warned in a new report shared first with Axios.
Why it matters: Companies need to start budgeting for better cyber defenses right now.
- Governments also need to quickly coordinate policies for responsibly using AI and securing critical infrastructure like energy grids and railways.
Driving the news: Goldilock, a U.K. cybersecurity startup focused on critical infrastructure security, says agentic malware will become a reality in two years.
- This malware would have the ability to worm its way through computer networks just like the infamous Stuxnet malware — but this time it could automatically update and adapt to a computer system to evade detection, the report warned.
Flashback: Discovered in 2010, Stuxnet is a computer worm likely deployed by the U.S. and Israel to target Iran's nuclear program.
- The worm exploited zero-day vulnerabilities to gain access to Siemens' industrial control systems. The attack may have destroyed upward of 1,000 Iranian centrifuges.
- An AI-powered Stuxnet could be worse: Instead of targeting specific systems, the malware would theoretically identify new targets on its own and automatically compromise them, Goldilock warns.
The big picture: Heightened global threats and instability are creating a ripe environment for adversaries to invest in AI-powered cyber weapons, Stephen Kines, co-founder and COO of Goldilock, told Axios.
- Officials also say China could pursue an invasion of Taiwan as soon as 2027 — just two years from now.
Threat level: Goldilock predicts that energy grids, transportation networks, financial institutions and health care systems are the most at risk for agentic malware.
- That's because foreign governments are likely to develop and deploy this kind of malware first, in hopes of causing societal panic in the U.S.
- Shutting down an electric grid or disrupting hospital operations is a surefire way to achieve that goal.
Between the lines: Kines is most concerned about the rate at which AI is being deployed and developed — with limited guardrails.
- As of now, few hurdles exist to stop adversarial countries or cyber criminal gangs from developing their own agentic malware.
- "If we don't harness some better cybersecurity, then we have problems," Kines said. "And the reality is that Big Tech has not kept up."
The intrigue: Depending on whom you ask, AI-powered security tools could go a long way toward stymieing AI-powered malware.
- But Kines says fighting AI with AI alone won't solve the problem.
- "Because AI has been democratized, and anybody can use it, learn it, take existing code and apply it," Kines said. "You're never going to win that code war."
Reality check: Goldilock offers a remote "kill switch" to disconnect servers from the rest of a critical infrastructure company's systems as soon as malicious activity is detected.
- The company argues that this type of network segmentation is key for critical infrastructure operators, whose teams often have to travel on-site to a power plant or similar location to manually disconnect cables when they detect a cyberthreat.
- Kines also said this type of network segmentation is needed to stop an AI-enhanced Stuxnet, alongside AI-powered cyber defenses.
The bottom line: Organizations need to invest in AI-enhanced threat intelligence, network segmentation tools and AI-based detection systems, the report advised.
- Corporations also must start working with one another, and with the public sector, to share threat intelligence in real time.
- And government agencies need to quickly invest resources to foster AI-driven cyber tools.
