Axios AI+

February 21, 2025
If you're in D.C.: Join Axios on Wednesday, Feb. 26, at 6pm ET for an exclusive reception featuring a deeper look at Netflix's new "Zero Day" series, followed by discussions on U.S. cybersecurity with Rep. Chrissy Houlahan (D-Pa.), former CISA director and SentinelOne chief intelligence and public policy officer Chris Krebs and more. RSVP here.
Today's AI+ is 932 words, a 3.5-minute read.
1 big thing: OpenAI disrupts Chinese influence campaigns
OpenAI spotted and disrupted two uses of its AI tools as part of broader Chinese influence campaigns, including one designed to spread Spanish-language anti-American disinformation, the company said.
Why it matters: AI's potential to supercharge disinformation and speed the work of nation-state-backed cyberattacks is steadily moving from scary theory to complex reality.
Driving the news: OpenAI published its latest threat report on Friday, identifying several examples of efforts to misuse ChatGPT and its other tools.
- One campaign, which OpenAI labeled "sponsored discontent," used ChatGPT accounts to generate both English-language comments attacking Chinese dissident Cai Xia and Spanish-language news articles critical of the U.S.
- Some of the short comments were posted on X, while the articles found their way into a variety of Latin American news sites, in some cases as sponsored content.
What they're saying: "As far as we know this is the first time a Chinese influence operation has been found translating long-form articles into Spanish and publishing them in Latin America," Ben Nimmo, principal investigator on OpenAI's intelligence and investigations team, said in a briefing with reporters.
- "Without our view of their use of AI, we would not have been able to make the connection between the tweets and web articles."
Another campaign, which OpenAI dubbed "peer review," consisted of accounts using ChatGPT to generate marketing materials for a social media listening tool that its creators claimed had been used to send reports of protests to the Chinese security services.
- OpenAI banned the related accounts, saying they violated company policies that "prohibit the use of AI for communications surveillance, or unauthorized monitoring of individuals."
- Other operations called out in the latest report include several scams, influence campaigns tied to North Korea and Iran, and an effort to sway an election in Ghana.
Between the lines: OpenAI, which started publishing threat reports last year, says that it's doing so "to inform efforts to understand and prepare for how the P.R.C. or other authoritarian regimes may try to leverage AI against the U.S. and allied countries, as well as their own people."
- As the new report shows, AI tools can be used at various points in a disinformation campaign, sometimes revealing other aspects of a group's techniques, aims and weaknesses.
- "Threat actors sometimes give us a glimpse of what they are doing in other parts of the internet because of the way they use our models," Nimmo said.
Yes, but: As open source tools become more powerful and can be run locally, threat actors may use them for more of their tasks, making such efforts harder to detect.
- In the "peer review" case, for example, OpenAI noticed that while ChatGPT was used to edit and debug some code, there were also references to the use of open-source models, including DeepSeek and a version of Meta's Llama 3.1.
"This was a really interesting case where it looks like a threat actor at least mentions the use of a bunch of different models," Nimmo said, noting it's not clear what motivated the use of so many tools.
- "Maybe they wanted to break up their signal," he said. "There's a bunch of different reasons that some of this could be going on."
The bottom line: As AI continues to ratchet up attackers' capabilities, AI providers are having to put more effort into tracking and foiling them — often with the help of their own tools.
2. Open source AI backers launch ad campaign
The Open-Source AI Foundation has launched a $10 million ad campaign aimed at convincing policymakers and others of the benefits of such technology, Axios has learned.
Why it matters: There is a spirited debate in both technology and policy circles over whether open source AI makes the technology safer or less secure.
State of play: The foundation, a new effort led by Cambridge Analytica whistleblower Brittany Kaiser, aims to convince lawmakers that civilian agencies should abandon work with closed source AI companies and adopt open source technologies instead.
- Asked whether the foundation had received backing from tech giants, Kaiser told Axios in a statement, "Not currently — though some of the big five have already expressed interest, and we invite anyone who shares our mission to join us."
What they're saying: "All closed-source AI contracts with civilian agencies should be terminated immediately. Government AI should be built openly, with transparency and auditability at its core," Joe Merrill, CEO of OpenTeams, said in a statement. "This allows the models and training to be publicly scrutinized and verified."
- "Large language models are built on open frameworks and are trained on public data from the web," Eliza Labs CEO Shaw Walters said in a statement. "Closed models are fine for other use cases. But not for public service."
- "Open-source AI allows the public to audit and verify algorithms, enhancing trust in government technology," Quansight CEO Travis Oliphant said in a statement. "It is also more secure because any attacks or exploits can be identified and remediated, while the models and their training can be audited to minimize bias."
The other side: Critics of open source AI argue it is inherently hard to control and will enable bad actors and foreign rivals of the U.S. to evade safeguards.
3. Training data
- OpenAI said ChatGPT now has 400 million weekly active users. (CNBC)
- Meanwhile, GPT-4.5 could arrive as soon as next week. (The Verge)
4. + This
I was today years old when I learned that second is abbreviated 2nd, third 3rd, fourth 4th and so on because those suffixes are the last two letters of the full word.
Thanks to Scott Rosenberg and Megan Morrone for editing this newsletter and Matt Piper for copy editing it.
Sign up for Axios AI+


