Hackers are already abusing ChatGPT to write malware
- Sam Sabin, author of Axios Codebook

Malicious hackers are already using the flashy new AI chatbot ChatGPT to build low-level cyber tools, including malware and encryption scripts, according to a recent report.
Why it matters: Security experts have been warning that OpenAI's ChatGPT tool could help cybercriminals speed up their attacks, and those warnings are already coming true.
Driving the news: Check Point Research said Friday its researchers have spotted malicious hackers using ChatGPT to develop basic hacking tools.
- The report details three instances in December in which hackers discussed using ChatGPT to write malware, build data encryption tools and write code for new dark web marketplaces.
The big picture: Hackers are always looking for ways to save time and speed up their attacks — and ChatGPT's AI-generated responses tend to provide a solid starting point for hackers writing malware and phishing emails.
Details: According to the report, the hackers have so far only created basic data-stealing and encryption tools.
- One member noted in the forums that OpenAI's tool gave him a "nice [helping] hand to finish the script with a nice scope," per the report.
- Another "tech-oriented" hacker was also spotted teaching "less technically capable cybercriminals how to utilize ChatGPT for malicious purposes."
- Check Point noted that the data encryption tool one hacker created could easily be turned into ransomware once a few minor problems were fixed.
Yes, but: It's still too soon to say how much cybercriminals will lean on ChatGPT in the long run — or for how much longer they'll be able to abuse the platform.
- OpenAI has previously said ChatGPT is a research preview and that it is constantly looking for ways to improve the product and prevent abuse.