Chinese law enforcement tried using ChatGPT to discredit Japan's PM, OpenAI says

Illustration: Annelise Capossela/Axios
OpenAI has banned a ChatGPT account linked to Chinese law enforcement that tried to use the AI chatbot to undermine support for Japan's prime minister, the company said in a report Wednesday.
Why it matters: The operation was unusual and "revealed a lot about China's strategy for covert influence operations and transnational repression," Ben Nimmo, principal investigator on OpenAI's intelligence and investigations team, told reporters.
- "These cyber special operations are large scale, resource intensive and sustained," Nimmo added.
Driving the news: An individual tied to Chinese law enforcement used ChatGPT to continuously edit and polish updates to reports about their so-called "cyber special operations."
- The updates suggest that Chinese law enforcement has built and is expanding a strategy to "suppress dissent and silence critics both online and offline" around the world using hundreds of people, thousands of fake accounts and locally deployed AI models, according to the report.
- The updates also referenced plans for a large-scale influence operation partially powered by Chinese open-weight AI models.
What they're saying: "It's not just digital, it's not just about trolling, it's industrialized," Nimmo told reporters. "It's about trying to hit critics of the (Chinese Communist Party) with everything, everywhere, all at once."
Zoom in: In mid-October, the user attempted to use ChatGPT to design and refine a campaign aimed at discrediting Sanae Takaichi — who won a landslide election victory last month — after she publicly criticized the state of human rights in Inner Mongolia.
- Takaichi also infuriated Beijing last year when she suggested that Japan might defend Taiwan in the event of a Chinese invasion.
- The plan hinged on six elements, including posting and amplifying negative comments about Takaichi on social media; sending complaints to Japanese politicians using fake email accounts posing as foreign residents; and accusing Takaichi of far-right leanings.
Yes, but: ChatGPT refused to help the individual refine the campaign.
- The user instead returned a few weeks later to update a report indicating the campaign went ahead, likely using locally hosted Chinese AI models, according to OpenAI.
- That update also suggested the user included a set of hashtags in their social media operations. OpenAI researchers traced those hashtags to posts on X, Blogspot and Pixiv, a popular Japanese online community for artists.
The big picture: Many of the other influence operations outlined in OpenAI's report reflect the same old tools and tactics that influence operators typically use in online campaigns — just supercharged with AI.
- ChatGPT helped Cambodia-based scammers create marketing materials for a fake online dating service used in romance scams.
- The chatbot also helped Russian-based actors translate social media comments in Spanish for an operation targeting Argentina.
What to watch: Whether scammers and nation-state operators change their tactics now that OpenAI has publicly exposed their techniques and tells.
Go deeper: Foreign disinformation enters AI-powered era
