OpenAI report details election interference efforts, hoaxes

Illustration: Brendan Lynch/Axios
OpenAI has seen a continued stream of attempts to use AI as part of political misinformation campaigns on social media, but said the effort that spread widest was a hoax that only appeared to use its services.
Why it matters: A new report from the company, released on Wednesday, highlights the continued use of generative AI by foreign adversaries of the U.S. — but shows that, at least for this year's election, the impact appears to be modest.
Driving the news: OpenAI said it saw and disrupted a number of political and election-related influence campaigns in recent months, ranging from one-off efforts typed into ChatGPT to larger, more systematic projects.
- As it has noted in past reports, OpenAI said its tools appear to be used as an intermediate step in broader campaigns rather than for end-to-end work. In one case, for example, a Russian influence operation used images generated by DALL·E in an attempt to make its messages more eye-catching.
- "The threat actors look like they're still experimenting with different approaches to AI, but we haven't seen evidence of this leading to meaningful breakthroughs in their ability to build higher audiences," OpenAI principal investigator Ben Nimmo said in a briefing with reporters.
- The company also detected attempts to gain access to the credentials of OpenAI employees, including by a China-based adversary that aimed to access workers' email accounts.
Yes, but: The AI-related campaign that spread most widely on social media was one that only appeared to use OpenAI's systems.
- "It was a false claim that seemed to show Russian trolls using our model but forgetting to pay for it," Nimmo said. "In fact, that post wasn't generated using our models at all."
The intrigue: OpenAI says it has built additional AI tools in recent months that are helping it to more quickly detect and analyze potentially harmful activity.
- "These tools have allowed us to compress some of the analytical steps we take from days down to minutes, and some of the operations that we disrupted in the past couple of months were discovered thanks to our use of AI," Nimmo said.
