Aug 14, 2023 - Technology

Hackers explore ways to misuse AI in major security test

Animated illustration of a laptop becoming more transparent to reveal binary code

Illustration: Annelise Capossela/Axios

Generative AI's security vulnerabilities — and how we get ahead of them — are about to become the tech and policy world's top priorities after this past weekend's largest security test of large language models revealed just how diverse the problems already are.

Driving the news: Nearly 2,500 hackers spent the weekend at the DEF CON conference's AI Village poking and probing some of the most popular large language models for flaws.

Why it matters: The AI Village's Generative Red Team Challenge was seen as a watershed moment for the broader technology industry, which has historically struggled to put security at the forefront as new innovations emerge.

  • Demand for testing, evaluation and red teaming — the practice of letting ethical hackers attempt to break into a system to learn about vulnerabilities — of LLMs will likely multiply "10x" after this weekend's event, Russell Kaplan, head of engineering at Scale AI, told Axios.

What's happening: The challenge, backed by the White House and several major generative AI developers, took place in a large room at the Caesars Forum in Las Vegas across 156 closed-network computer terminals.

  • But even that didn't seem to be enough space: On Friday, the first day of the event, the line to get in spanned two hallways.
  • Some participants spent an hour waiting to attend the village's challenge or its corresponding panel talks.

Zoom in: Participants received a set of well-defined tasks aimed at getting the large language models to share harmful, sensitive or false information.

  • One prompt challenge asked participants to get an LLM to spit out someone's credit card information. Another asked participants to get instructions on how to stalk someone.
  • Participants found the challenges more difficult than they anticipated, Rumman Chowdhury, a village organizer and co-founder of Humane Intelligence, told Axios at the event.
  • Some other participants were surprised to see that the models were more neutral on various political and societal issues than expected, Chowdhury added.

The intrigue: Operators seemingly were updating their models based on the initial findings overnight, Ray Glower, a first-year computer science major at Kirkwood Community College in Iowa who participated in the challenge, told Axios.

  • Glower said when he arrived early on Thursday, he found a way to get the model he was testing to provide detailed instruction on how to stalk someone.
  • "I came back to try it Friday, and I tried the same thing, and it did not work," Glower said. "The AI is getting better with each prompt."

Zoom out: The AI Village also hosted a set of panel discussions, including some that detailed how easily generative AI tools can be manipulated for malicious purposes.

  • A pair of Sophos analysts, Ben Gellman and Younghoo Lee, showcased how they were able to use three publicly available AI tools to create a fraudulent retail site in just 8 minutes. It only cost them $4.23.
  • Adrian Wood, who uses the hacking alias "threlfall," detailed how he was able to create a series of fake corporate accounts on Hugging Face — a platform where AI developers host the models they're working on — and abused those models to host a malware server.

What to watch: The AI Village's work this past weekend is expected to have a significant influence on both the cybersecurity industry and the policy world.

  • Arati Prabhakar, director of the White House's Office of Science and Technology Policy, spent two hours visiting the challenge on Saturday. Prabhakar told CyberScoop at the event that the White House is now fast-tracking an executive order related to the topics discussed this weekend.
  • The AI Village organizers will present the event's initial findings — which they'll glean as they comb through the challenge data over the next month — to the United Nations next month in an effort to bring more countries into the AI security conversation, Chowdhury said.

The bottom line: "Across government, industry and academia, folks recognize that this technology is incredibly high potential and with that comes incredible responsibility for testing and evaluation," Kaplan said.

Go deeper