Feb 14, 2024 - Technology

Nation-state hackers are already using AI chatbots


Illustration: Sarah Grillo/Axios

Hackers connected to the Chinese, Iranian, North Korean and Russian governments are already using AI chatbots to write phishing emails and study potential targets, according to new research from Microsoft and OpenAI.

Why it matters: Government officials and cybersecurity executives have been warning that ChatGPT and similar tools could speed up hackers' attacks. Now that reality is here.

What's happening: Microsoft and OpenAI detailed new instances of nation-state hacking teams using large language models (LLMs) in a report released today.

  • The report says that Russian military-linked Forest Blizzard, also known as Fancy Bear, has used LLMs to research satellite and radar technologies that could be relevant to military operations in Ukraine.
  • North Korean group Emerald Sleet (aka Kimsuky) has used LLMs to help craft spear-phishing emails to academics and other experts, and to research which think tanks it should target.
  • Iran-linked Crimson Sandstorm, or Imperial Kitten, has used LLMs to generate code snippets, research how to disable antivirus systems, and craft phishing emails.
  • And two China-linked groups — Charcoal Typhoon and Salmon Typhoon — have used LLMs to support the development of new hacking tools, generate believable phishing messages and gather information about high-profile individuals.

Yes, but: The companies said they have not seen evidence of AI-enabled cyberattacks — which would involve someone programming a large language model to carry out an attack on its own.

Between the lines: Many of these examples are in line with experts' predictions that AI tools would simply speed up hackers' attacks by helping to write phishing emails and malware.

  • An AI-enabled cyber war probably won't be here anytime soon.

What we're watching: Cyber defenders, meanwhile, say AI tools can sharpen their own threat detection and response plans.
