May 19, 2021

The disinformation threat from text-generating AI

[Illustration of a fountain pen writing 1s and 0s. Credit: Eniola Odetunde/Axios]

A new report lays out the ways that cutting-edge text-generating AI models could be used to aid disinformation campaigns.

Why it matters: In the wrong hands, text-generating systems could be used to scale up state-sponsored disinformation efforts — and humans would struggle to know when they're being lied to.

How it works: Text-generating models like OpenAI's leading GPT-3 are trained on vast volumes of internet data and learn to produce eerily lifelike text from human prompts.
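
GPT-3 itself was gated behind OpenAI's API, but the prompt-and-complete pattern it relies on is easy to illustrate with an open model. Below is a minimal sketch using Hugging Face's transformers library, with the smaller, openly available GPT-2 standing in for GPT-3; the model choice and prompt are illustrative assumptions, not details from the report.

```python
# Minimal sketch: prompt-driven text generation with an open model.
# GPT-2 stands in for GPT-3 here; the prompt is purely illustrative.
from transformers import pipeline, set_seed

set_seed(42)  # make the sampled output reproducible

generator = pipeline("text-generation", model="gpt2")

prompt = "Breaking news out of Washington today:"
result = generator(prompt, max_new_tokens=40, do_sample=True)

# The model continues the prompt with plausible-sounding text.
print(result[0]["generated_text"])
```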

  • In their new report released this morning, researchers from Georgetown's Center for Security and Emerging Technology (CSET) examined how GPT-3 might be used to turbocharge disinformation campaigns like the one carried out by Russia's Internet Research Agency (IRA) during the 2016 election.

What they found: While "no currently existing autonomous system could replace the entirety of the IRA," pairing algorithmic text generation with experienced human operators produces results that are nothing less than frightening.

  • Like many other automation and AI technologies, GPT-3's real power is in its ability to scale, says Ben Buchanan, director of the CyberAI Project at CSET and a co-author of the report.
  • GPT-3 "lets operators try a bunch of variants on a message and see what sticks," he says. "The scale might lead to more effective feedback loops and iterations."
  • "A future disinformation campaign may, for example, involve senior-level managers giving instructions to a machine instead of overseeing teams of human content creators," the authors write. "The managers would review the system’s outputs and select the most promising results for distribution."

What to watch: While OpenAI has tightly restricted access to GPT-3, Buchanan notes that it's "likely that open source versions of GPT-3 will eventually emerge, greatly complicating any efforts to lock the technology down."

  • Researchers at Huawei have already created a Chinese-language equivalent at the scale of GPT-3, and plan to provide it freely to all.
  • Because the latest computer-generated text is difficult to identify, Buchanan says the best defense is for platforms to "crack down on the fake accounts" used to disseminate misinformation.

The bottom line: The report's authors write that, like much of social media more broadly, systems like GPT-3 seem "more adept as fabulists than as staid truth-tellers."
