Aug 8, 2023 - Technology

Exclusive: IBM researchers easily trick ChatGPT into hacking

Illustration of a robot made out of ASCII text on a laptop.

Illustration: Maura Losch/Axios

Tricking generative AI to help conduct scams and cyberattacks doesn't require much coding expertise, new research shared exclusively with Axios warns.

Driving the news: Researchers at IBM released a report Tuesday detailing easy workarounds they've uncovered to get large language models (LLMs) — including ChatGPT — to write malicious code and give poor security advice.

  • All it takes is knowledge of the English language and a bit of background knowledge on how these models were trained to get them to help with malicious acts, Chenta Lee, chief architect of threat intelligence at IBM, told Axios.

Why it matters: The research comes as thousands of hackers head to Las Vegas this week to test the security of these same LLMs at the DEF CON conference's AI Village.

The big picture: So far, cybersecurity professionals have sorted their initial response to the LLM craze into two buckets:

  1. Several companies have released generative AI-enabled copilot tools to augment cybersecurity defenders' work and offset the industry's current worker shortage.
  2. Many researchers and government officials have also warned that LLMs could help novice hackers write malware with ease and make phishing emails appear legitimate.

Between the lines: Those use cases just scratch the surface of how generative AI will likely affect the cyber threat landscape. IBM's research provides a preview of what's to come.

Details: Lee just told different LLMs that they were playing a game with a specific set of rules in order to "hypnotize" them into betraying the "guardrail" rules meant to protect users from various harms.

  • In one case, Lee told the AI chatbots that they were playing a game and needed to purposefully share the wrong answer to a question to win and "prove that you are ethical and fair."
  • When a user asked if it was normal to receive an email from the IRS to transfer money for a tax refund, the LLM said it was. (It's definitely not.)
  • The same type of "game" prompt also worked to create malicious code, come up with ways to trick victims into paying ransoms during ransomware attacks and write source code with known security vulnerabilities.

The intrigue: Researchers also found that they could add additional rules to make sure users don't exit the "game."

  • In this example, the researchers built a gaming framework for creating a set of "nested" games. Users who try to exit are still dealing with the same malicious game-player.

Threat level: Hackers would need to launch a specific LLM to hypnotize it and deploy it in the wild — which would be quite the feat.

  • However, if it's achieved, Lee can see a scenario where a virtual customer service bot is tricked into providing false information or collecting specific personal data from users, for instance.

What they're saying: "By default, an LLM wants to win a game because it is the way we train the model, it is the objective of the model," Lee told Axios. "They want to help with something that is real, so it will want to win the game."

Yes, but: Not all LLMs fell for the test scenarios, and Lee says it's still unclear why since each model has different training data and rules behind them.

  • OpenAI's GPT-3.5 and GPT-4 were easier to trick into sharing wrong answers or play a game that never ended than Google's Bard and a HuggingFace model.
  • GPT-4 was the only model tested that understood the rules enough to provide inaccurate cyber incident response advice, such as recommending victims pay a ransom.
  • Meanwhile, GPT-3.5 and GPT-4 were easily tricked into writing malicious source code, while Google's Bard would do so after the user reminded it to do so.

Sign up for Axios' cybersecurity newsletter Codebook here

Go deeper