Exclusive: New Zealand's Ardern drafts AI in the fight against extremist content

- Ryan Heath, author of Axios AI+

Photo illustration: Sarah Grillo/Axios. Photo: Marla Aufmuth/Getty Images for Pennsylvania Conference For Women
AI companies including OpenAI and Anthropic have signed up to suppress terrorist content, joining the Christchurch Call to Action — a project started by French President Emmanuel Macron and then-New Zealand Prime Minister Jacinda Ardern in the wake of the 2019 mass killing at a Christchurch, N.Z. mosque.
Why it matters: Government leaders and companies declared that "without safeguards, it is inevitable advanced AI capabilities will be weaponized by terrorists and violent extremists" — and conceded in a paper obtained by Axios that a "massive amount" of terrorist content has been disseminated since Hamas attacked Israel Oct. 7.
Ardern sat down with Axios in Paris Friday to explain how AI is changing her approach to extremist content.
- "I can see many uses for AI in trying to reduce terrorist activity," Ardern said, pointing to a rapidly growing safety tech industry, low-cost open source solutions, and a collaboration between Microsoft and Tech Against Terrorism to enhance Azure's AI content safety service.
Driving the news: OpenAI and Anthropic on Friday joined the Christchurch Call to Action at a summit in Paris. Discord and Vimeo also joined.
- Social media companies and other online providers became a target of pressure after the Christchurch gunman used Facebook to livestream his crime.
Between the lines: A crisis response protocol developed by Christchurch signatories helped achieve a comprehensive takedown of livestream footage of a 2022 mass shooting at a Buffalo supermarket.
- But while Google has said it has taken down thousands of videos connected to the Hamas-Israel conflict since Oct. 7, Ardern says the protocols have often not been "deployed appropriately" in recent weeks.
- "In some firms we're seeing less content moderation rather than more," she said, linking the problem back to budget and job cuts at big tech firms.
By the numbers: Tech Against Terrorism calculates around 5,000 AI-generated pieces of terror content are created weekly, and users of far-right message board site 4chan have been found to use Bing's AI image generator to create Nazi imagery.
- One big concern is that AI tools could help terror groups bypass tech platforms' automated systems — built on "hash-sharing" techniques — for identifying and removing specific pieces of content.
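The bypass risk above comes down to how exact hash-matching works. A minimal sketch (the function and variable names here are illustrative, not any platform's actual API) shows why: a cryptographic digest matches only identical files, so even a trivially altered AI-generated variant produces a new hash that isn't in the shared database.

```python
import hashlib

def content_hash(data: bytes) -> str:
    # Platforms share digests like this rather than the content itself,
    # so each service can block known material without redistributing it.
    return hashlib.sha256(data).hexdigest()

# An illustrative shared blocklist of known extremist-content hashes.
known_hashes = {content_hash(b"known extremist video bytes")}

def is_flagged(data: bytes) -> bool:
    return content_hash(data) in known_hashes

# An exact re-upload matches the shared hash...
print(is_flagged(b"known extremist video bytes"))   # True
# ...but changing even one byte -- the kind of variation generative AI
# can produce at scale -- yields a different digest and slips through.
print(is_flagged(b"known extremist video bytes!"))  # False
```

Real systems also use perceptual hashes, which tolerate small pixel-level changes, but wholly regenerated variants of the same material can still evade them — which is the concern raised here.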
Context: The original call to action commits signatories to "immediate, effective measures to mitigate the specific risk that terrorist and violent extremist content is disseminated through livestreaming."
- "I'm not going to get over March 15," Ardern said referring to the Christchurch massacre, calling each new terror attack "the worst possible kind of motivation" to stay involved in the project after she stepped down as prime minister in January.
What they're saying: Ardern knows some of her progressive political allies see tech companies as "complicit in very direct harms," but said, "I'm a techno-pragmatist."
- She thinks public skepticism about AI "is as much about social media as it is about AI" — because social media platforms made big claims about their positive impact on the world while failing to protect the public from harm.
- Ardern is enthusiastic about the openness she says AI companies are providing. "I think it's an acknowledgment that what we've seen in the past [with social media] is harm first, retrofit solutions later," she said.
- OpenAI president and co-founder Greg Brockman admitted the company had "a lot to learn" about how to suppress terrorist content, but insisted in a statement that "our most important safety strategy is to iteratively deploy our technology as it improves."
- Ardern's answer to both AI startups focused on survival and social media companies cutting back their integrity teams: "Let's crowd-source the red-teaming" of AI models.
- "AI will often be more reliable and faster than human moderators" at spotting extremist content, and will "reduce the human cost of moderation" by limiting workers' exposure to trauma-inducing content, she believes.
Yes, but: Companies and governments that sign on to the Christchurch Call have virtually no reporting requirements.
- That gives companies a free hand to cut content moderation and other integrity resources even while taking credit for signing on to the Call. Google and Microsoft are the only companies included in a new report on the project's progress.
The intrigue: Joining the Call requires governments or companies to commit to "a free, open and secure internet" and to uphold human rights, and for some governments these are hurdles.
- The system "does complicate, in some cases, membership," Ardern said, insisting that non-democracies and companies based in those countries can still apply Call standards, without the hassle of joining.