Cybercriminals won't become AI experts overnight
- Sam Sabin, author of Axios Codebook

The likelihood of cybercriminals investing energy and money into incorporating artificial intelligence into their schemes anytime soon is vastly overblown.
The big picture: If anything, cyber defenders will see the more immediate benefits of AI — it can help them close the run-of-the-mill security holes that criminals keep exploiting.
- Cybercriminals are often looking for the simplest, quickest schemes to make money, and bringing today's AI into play doesn't fit that bill, John Dwyer, head of research at IBM Security X-Force, told Axios.
Why it matters: Since the current AI wave started, article after article has warned about the ways the technology will help malicious hackers develop more sophisticated and harder-to-detect schemes.
- But a lot of that requires investments of time and money — something that opportunistic cybercriminals usually lack, experts told Axios.
Between the lines: Even getting generative AI to write malicious code requires expert understanding, as well as several prompts to get malware up to snuff.
- To get true value out of AI's most advanced large language models, cybercriminals would need to train the models themselves — a problem, considering most hackers don't double as data scientists, Chester Wisniewski, field CTO of applied research at Sophos, told Axios.
- “It would need to get to the point in which the revenue brought in by malicious AI would be high enough for it to be worth it,” Dwyer said.
The intrigue: Major cybersecurity firms are already starting to play around with ChatGPT and other AI tools to figure out what they'll be up against — and how to improve their products before cybercriminals invest in the technology.
- Palo Alto Networks is one of them, Michael Sikorski, CTO of the company's threat intelligence team, told Axios. Because AI tools are built on existing tools, most of the malicious code they spit out is repurposed from previous attacks, he added.
- "Maybe it could build things faster than they already are, but it’s not going to be something that’s totally novel," he said. "It’s not trained on how to [write] a zero-day or find a vulnerability or how to exploit a vulnerability."
- Sandra Joyce, executive vice president and head of global intelligence at Mandiant, told reporters at the RSA Conference in San Francisco last week that the firm's products now have AI baked in to help distill threat intelligence and investigate ongoing incidents.
What they're saying: "The upside is the good guys do have data scientists, and many of us do spend millions of dollars in the cloud on GPUs," Wisniewski said.
- "We can train it to do some pretty incredible things to enable less-skilled security practitioners to up their game in being able to analyze data more quickly, more accurately and that kind of thing," he added.
Zoom out: Cybercriminals are still seeing success from simple techniques, such as getting people to respond to phishing emails and scam texts or using stolen login credentials sold on the dark web to hijack accounts.
- 37% of incidents last year started with hackers targeting a software vulnerability, according to a Sophos report released last week — with the vast majority of those cases involving already publicly known flaws that companies failed to patch.
- 54% of incidents start with hackers stealing someone's password, according to a report from the Ponemon Institute.
Yes, but: It's too early to judge the impact of generative AI on cybersecurity, and senior government officials are still waiting to see whether cybercriminals will take the time to invest in and learn the technology.
- "I'll tell you, buckle up," Rob Joyce, director of cybersecurity at the National Security Agency, said during the RSA Conference. "Next year, if we're talking a similar year in review, we'll have a bunch of examples of where it's been used and where's it's succeeded."