ChatGPT-written phishing emails are already scary good

- Sam Sabin, author of Axios Codebook

Illustration: Sarah Grillo/Axios
ChatGPT is already pretty good at writing believable phishing emails, despite efforts to limit its ability to do harm, according to new IBM research.
Why it matters: Cybersecurity officials and industry leaders have long warned that hackers could weaponize ChatGPT and similar AI tools to quickly write phishing emails that the average person would think are authentic.
- IBM's research is some of the first to offer concrete data on how close AI-written phishing emails are to matching human-crafted ones.
Driving the news: A team of IBM researchers released the results of an A/B testing experiment they ran on roughly 1,600 employees at an unnamed global healthcare company.
- In the experiment, half of the employees got a phishing email written fully by IBM's X-Force team.
- The other half got an email written using ChatGPT.
By the numbers: 14% of employees who received the human-written phishing email fell for it and clicked on a malicious link, according to the IBM report released Tuesday.
- But the ChatGPT-written email was close behind, with 11% of its targets falling for it (see the quick significance check below).
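How close is 11% to 14%? A standard two-proportion z-test gives a rough sense. Here's a minimal Python sketch; it assumes, since the report doesn't publish exact counts, that the roughly 1,600 employees were split evenly between the two emails:

```python
from math import sqrt

# Back-of-the-envelope significance check for the reported click rates.
# Assumption (not stated in the report): the ~1,600 employees were split
# evenly, so each email went to about 800 people.
n_human, n_ai = 800, 800
clicks_human = round(0.14 * n_human)  # 14% clicked the human-written email
clicks_ai = round(0.11 * n_ai)        # 11% clicked the ChatGPT-written email

p_human = clicks_human / n_human
p_ai = clicks_ai / n_ai

# Two-proportion z-test using the pooled click rate.
p_pool = (clicks_human + clicks_ai) / (n_human + n_ai)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_human + 1 / n_ai))
z = (p_human - p_ai) / se

print(f"human: {p_human:.0%}, ChatGPT: {p_ai:.0%}, z = {z:.2f}")
# Prints z ≈ 1.81 (two-sided p ≈ 0.07): under these assumed arm sizes,
# the three-point gap sits right at the edge of statistical noise.
```

In other words, if the split was anywhere near even, the experiment can't cleanly distinguish the two emails' effectiveness, which is the researchers' point.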
Between the lines: It took only five minutes for Stephanie "Snow" Carruthers, IBM's chief people hacker who led the experiment, and her team to get ChatGPT to spit out the email they ended up using.
- Meanwhile, her team usually needs about 16 hours to write a believable phishing email, since they closely study the organization they're targeting to determine what issues employees are interested in.
What they're saying: "It makes me kind of fearful for the future," Carruthers told Axios.
- "If this is what it's at right now, what's it going to be like in, I was going to say five years, but honestly six months?"
How it works: ChatGPT developer OpenAI has put in safeguards that prevent the generative AI chatbot from responding to direct requests for a phishing email, malware or other malicious cyber tools.
- However, social engineers like Carruthers have been able to work around those safeguards to develop malicious emails.
- In this case, Carruthers and her team started by asking ChatGPT to list the primary areas of concern for employees in the healthcare industry.
- Then, the team asked ChatGPT to list the top social-engineering and marketing techniques an email should use to drive engagement, as well as who the email should appear to come from.
- Finally, IBM asked ChatGPT to craft an email based on the information it had just provided.
The intrigue: Initially, three of IBM's clients were signed up to participate in the study. But once they saw the email ChatGPT was able to write, two companies backed out because they feared too many of their employees would fall for it.
The big picture: Most cyberattacks start with an ordinary phishing email that delivers malware or sends users to a malicious website.
- 84% of survey respondents said their organizations faced at least one successful phishing attack in 2022, according to Proofpoint's State of the Phish report.
- The typical phishing email also isn't written by expert researchers like IBM's team, but often by non-native English speakers overseas, who likely have a lower success rate.
Yes, but: Carruthers told Axios the ChatGPT-written email lacked the emotional intelligence needed to trick more employees.
- "That human element is so important to social engineering," she said. "The AI one, it still kind of felt cold and robotic to me."
- Right now, ChatGPT would likely only accelerate the work of experienced hackers, rather than providing new skills to inexperienced ones, since users still need some background knowledge to craft workable prompts.
Threat level: IBM's X-Force has yet to see wide use of generative AI in current campaigns, according to the report.
- But hackers are already developing and selling AI tools on underground cybercrime forums that could help expedite attacks in the near future.