AI ransomware attacks are coming

Illustration: Aïda Amer/Axios
Ransomware gangs are already starting to embed AI into their workflows, allowing them to fine-tune and amplify attacks that have already stolen billions from U.S. corporations.
Why it matters: Cyber criminals' use of AI is still the exception, security responders say, but AI tools promise to accelerate the data-stealing, file-encrypting cyberattacks that have wreaked havoc across industries.
The big picture: Just like everyone else, ransomware gangs have been playing with generative AI tools for a while. Researchers have seen hackers using AI chatbots to negotiate ransom payments, write code and perfect their social engineering attacks.
- Security analysts at cybersecurity firm ReliaQuest said in a report Tuesday that 80% of the ransomware-as-a-service groups they observe now offer automation or other AI tools on their platforms.
- A group of NYU researchers published a paper in August showing they could build a proof of concept that uses local LLMs to "autonomously plan, adapt and execute the ransomware attack lifecycle."
- Researchers at Palo Alto Networks also observed cyber criminals using AI-generated audio and video to impersonate employees as part of help desk scams — a tactic used to gain access before deploying ransomware.
Yes, but: Most ransomware gangs still don't have much incentive to tap AI tools when their cheaper, less sophisticated tactics still work so well.
- Rafe Pilling, director of threat intelligence at Sophos, told Axios that AI use is the "exception, and not the norm" as of now.
- Many of the hackers who are experimenting with AI tools appear to be affiliates focused on gaining access to organizations, Tony Anscombe, chief security evangelist at cybersecurity firm ESET, told Axios.
- "There's just so much low-hanging fruit out there," Anscombe said.
Threat level: Ransomware accounted for 91% of all incurred losses among cyber risk firm Resilience's customer base in the first half of 2025, according to data published in September.
- That could get worse once AI becomes more commonplace. In May, researchers at Palo Alto Networks found they could simulate a ransomware attack using AI in just 25 minutes, from initial compromise to data exfiltration.
- Microsoft also said in a report last week that adversaries are already starting to use AI tools to identify vulnerabilities, generate malware and improve their phishing campaigns.
Zoom in: Anthropic banned an account that was tied to a U.K. cyber criminal group that was using its Claude model to "develop, market, and distribute ransomware," according to Anthropic's August threat intelligence report.
- The group has only been active since January, but its tactics have advanced quickly, the company noted in the report, suggesting that Claude filled in the gaps for its "limited technical expertise."
- The hackers appeared unable to carry out encryption and other basic tactics without Claude's help. Yet they were still selling viable ransomware packages for $400 to $1,200, according to the report.
Between the lines: While most current cyber criminal gangs don't have the incentives to switch over to pricey AI tools, it's likely the next generation of ransomware actors will be AI natives who are keen to automate the entire process, Pilling said.
- "They'll be better at the [AI] tech, but worse at the ransomware and then that will kind of get better over time," he added.
- Anscombe foresees ransomware attacks shifting from stealing sensitive files to poisoning internal AI models.
- "It would be really hard to detect if somebody did do this," he said.
What to watch: Cybersecurity vendors are already using AI technologies to bulk up their ransomware detection tools.
