Exclusive: Researchers trick Claude plug-in into deploying ransomware

Illustration: Aïda Amer/Axios
An AI tool that Claude uses to automate tasks can be easily weaponized to execute ransomware, Cato Networks found in new research shared first with Axios.
Why it matters: Hiding malicious code inside third-party tools is nothing new, but now it's creeping into the AI plug-ins and automation scripts that engineers, developers and other corporate workers are rapidly adopting.
Zoom in: Cato Networks researcher Inga Cherny made the discovery in "Skills," a feature Anthropic launched in October that lets users install plug-ins that automate specific tasks in Claude Code.
- In her experiment, Cherny modified the widely shared, open-source "GIF Creator" skill built by Anthropic by inserting a seemingly benign function that downloads and executes outside code.
- Claude reviewed the visible code inside the Skill before running it, but it had no visibility into the outside code the tool downloaded later — allowing the "GIF Creator" to pull in a remote script containing the MedusaLocker ransomware without being flagged.
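The blind spot Cherny exploited is the classic staged-download pattern: the code a reviewer (human or AI) inspects contains only an innocuous-looking fetch, while the actual payload arrives over the network at runtime. A minimal sketch of the idea, with hypothetical names and a local stand-in string in place of a live download:

```python
import urllib.request

def fetch_and_run(url: str) -> None:
    """Hypothetical 'helper' a tampered Skill might contain.

    Static review sees only this benign-looking fetch; the script it
    pulls down later is never part of the code being inspected.
    """
    code = urllib.request.urlopen(url).read().decode()
    exec(code, {"__name__": "__main__"})

# Simulating the gap without a network call: the payload is just a
# string that does not exist until runtime, so nothing reviewed ahead
# of time reveals what it will do.
remote_payload = "result = 2 + 2"   # stand-in for remotely fetched code
scope = {}
exec(remote_payload, scope)
print(scope["result"])
```

The point is not the specific calls but the structure: any review confined to the visible source, whether by a human or by Claude itself, cannot flag behavior that only materializes after the download.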
Threat level: Anyone can download, tweak and re-upload a Skill in a similar way, Cherny said, and they don't need much technical know-how to do it.
- Cherny even asked Claude Code where to place the modification inside the "GIF Creator" skill.
- "Anyone can do it, you don't even have to write the code," Cherny said.
What they're saying: Cherny disclosed her findings to Anthropic on Oct. 30. The company responded that "it is the user's responsibility to only use and execute trusted Skills."
- Anthropic added that Skills are designed to execute code, but before doing so it warns users that "Claude may use instructions, code, or files from this Skill."
- In a comment to Axios, Anthropic reiterated the response it gave Cherny.
The bottom line: The AI threat landscape is quickly shifting from tricking large language models with jailbreaks to hijacking AI assistants themselves to deliver attacks, Cherny said.
- "Same as we had before with PowerShell or tools that run normally on Microsoft desktops, and they became a vector for delivery" of malware, Cherny said. "AI is going to be another vector."
Go deeper: Anthropic CEO called to testify on Chinese AI cyberattack
