May 25, 2023 - Technology

How generative AI could generate more antisemitism

Illustration of a Star of David being glitched and distorted.

Illustration: Aïda Amer/Axios

One more concern to add to the long list of fears stoked by the rise of generative AI: Experts say it could incite more antisemitism in the U.S. at an already fraught time for American Jews and other groups targeted by hate.

Driving the news: Generative AI chatbots like OpenAI's ChatGPT and Google's Bard, which generate written responses to prompts based on patterns learned from web data, have sparked a frenzy over the capabilities and potential dangers of AI's rapid technological advancement.

How it works: Because AI models learn to complete sentences by analyzing enormous quantities of text created by people, usually on the internet, they pick up bias embedded in both the digital environment and broader society.

  • Antisemitic incidents reached a new high in the U.S. in 2022, the Anti-Defamation League said in a March report, citing a 36% rise from 2021. That includes harassment, vandalism and assaults.
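To make the mechanism in "How it works" concrete, here is a minimal, purely illustrative Python sketch: a toy bigram model trained on a three-sentence corpus reproduces exactly the skew the corpus contains. The corpus and all names here are invented for illustration; real large language models are vastly more complex, but the relationship between training text and output is the same in kind.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": it completes a word by looking at which
# words followed it in the training text. Any skew in the corpus shows
# up directly in the model's completion probabilities.
corpus = [
    "the group is friendly",
    "the group is friendly",
    "the group is dangerous",  # one biased sentence skews the model
]

follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def completion_probs(prev: str) -> dict:
    """Probability of each word that can follow `prev`, straight from counts."""
    total = sum(follows[prev].values())
    return {w: n / total for w, n in follows[prev].items()}

# The model has "learned" the group is dangerous a third of the time,
# only because the training text said so.
print(completion_probs("is"))  # {'friendly': 0.666..., 'dangerous': 0.333...}
```

No one wrote a biased rule here; the bias arrives entirely through the data, which is why experts focus so heavily on what goes into training sets.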

What they're saying: "We are very concerned about how these models are being created," Anti-Defamation League CEO Jonathan Greenblatt told Axios' Russell Contreras.

  • Greenblatt pointed to how quickly Microsoft's Tay, a 2016 AI chatbot, was shut down after it began spewing lewd and pro-Nazi tweets soon after launch, behavior it learned from interactions with other Twitter users.
  • He said that while he appreciates some initial improvements to ChatGPT, tech companies need to be open and transparent about their data sets and algorithms. Calls for transparency are growing, but most companies don't disclose what data is used to train their models.

The big picture: Experts say AI-amplified antisemitism can serve as a bellwether for the far wider varieties of bias and hate the technology can spread on the basis of race, gender, LGBTQ identity, religion, immigration status and other factors.

By the numbers: An ADL survey of 1,007 adults in the U.S. found that 84% are worried generative AI tools will increase the spread of misleading or false information.

  • An April report from the Center for Countering Digital Hate found that when prompted about hot-button topics around hate, misinformation and conspiracies, Google's Bard chatbot produced text with misinformation 78 out of 100 times.
  • Google said at the time Bard is an "early experiment" prone to "sometimes [giving] inaccurate or inappropriate information."

The intrigue: CyberWell, an Israeli nonprofit that monitors antisemitism on social media in real time, is building "high-integrity data sets" that identify online antisemitism based on keywords and other markers, data that could later be used to train generative AI models to weed such content out, CEO Tal-Or Cohen Montemayor told Axios. (A sketch of such keyword-based flagging follows the bullets below.)

  • "I think people have understood since the rollout of social media, the social harms that are caused and the direct connection we see between online hate and hate crimes in real life," Cohen Montemayor said. "We understand we want a more ethical result when it comes to generative AI."
  • A March report from CyberWell showed wide disparities in how major social media platforms remove such content, which violates the platforms' terms of service but nonetheless proliferates online.
  • "Antisemitism is one of the most nuanced and layered forms of hate speech out there because of the history and modern manifestations of it," Cohen said. "But it is one of the best forms of hate speech to start training generative AI on, because it's so nuanced."

Go deeper: Antisemitism is especially hard to tackle in generative AI because it takes many forms. It can be a picture, a phrase, a veiled generalization, basic misinformation or derogatory language.

  • Generative AI programs are "trained on masses and masses of publicly available material on the internet, including social media content," Callum Hood, head of research for the Center for Countering Digital Hate, told Axios.
  • "You could knock out websites like Stormfront and so on, but we know that antisemitic conspiracy theories occasionally rear their ugly head in mainstream media."
  • He said: "We know the tech companies that have ingested this stuff to train their AIs have not done the best job of cleaning misinformation out of the training material ... and then that stuff ends up in the answers."

Most AI developers now say they are intervening to root out bias, both by removing it from training data sets up front and by adding rules and guardrails to the chatbot afterward (a minimal sketch of the guardrail approach appears after the bullets below).

  • But some companies and leaders in the field — including OpenAI co-founder Elon Musk, who has since broken with the company — criticize these efforts as being too "woke" and prefer what they claim is a "free speech" approach.
  • Given this political disagreement within the industry, even if big providers try hard to root out bias and hate speech, at least some widely available AI programs are likely to end up promoting it.
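Here is a minimal sketch of the second kind of intervention, a guardrail that screens a chatbot's draft response before it reaches the user. The blocklist, function name and refusal text are invented placeholders, not any vendor's actual implementation; production systems typically rely on trained classifiers rather than substring checks.

```python
# Illustrative output guardrail: screen a draft response before returning it.
BLOCKED_TERMS = {"placeholder_slur", "placeholder_trope"}

def apply_guardrail(draft_response: str) -> str:
    """Return the draft unchanged, or a refusal if it trips the blocklist."""
    lowered = draft_response.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "I can't help with that."
    return draft_response

print(apply_guardrail("Here is a neutral, factual answer."))
print(apply_guardrail("An answer containing placeholder_trope."))
```

Filtering at the output stage is far cheaper than retraining a model, but it can only catch what its rules anticipate, which is why developers pair it with cleaning training data up front.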