Oct 17, 2023 - Technology

Social media firms scramble to curb wartime misinformation


Illustration: Sarah Grillo/Axios

Big Tech companies that began to walk back content moderation ahead of the 2024 election are now starting to implement new rules in the wake of the Hamas-Israel war.

Why it matters: The level of mis- and disinformation flooding the internet is forcing tech firms to take tougher positions at a time when they are trying to prove they don't kowtow to political pressure.

  • Flashback: Most tech companies shrank their safety and security teams during mass layoffs across the sector earlier this year, further complicating content moderation efforts.

Driving the news: Meta on Friday said it's developed a "special operations center" staffed with experts, including fluent Hebrew and Arabic speakers, "to closely monitor and respond to this rapidly evolving situation in real time."

  • It's also working to avoid recommending potentially violative content across Facebook, Instagram and Threads by lowering the threshold at which its tech will start to block recommendations.
  • Given Hamas' threats to broadcast footage of hostages, Meta said its teams will remove any such content and will monitor livestreams.

Between the lines: Meta and its rival, Google-owned YouTube, have long banned Hamas content, citing the U.S. government's classification of the group as a terrorist organization, but other firms are now starting to take a harder stance on Hamas specifically.

  • A TikTok spokesperson confirmed to Axios that Hamas is banned from its platform. As the Washington Post notes, the company has historically declined to specify which groups it classifies as terrorist organizations.
  • In a statement released Sunday, TikTok said it will be adding more content moderators who speak Arabic and Hebrew.
  • It will also roll out reminders to users searching for certain keywords in Hebrew, Arabic, and English to help make them aware of potential misinformation.
  • YouTube confirmed in a statement to Axios that its policies prohibit content that praises, promotes or is produced by Hamas.
  • A spokesperson said the firm has removed "tens of thousands of harmful videos and terminated hundreds of channels."

Be smart: European regulators are putting more pressure on tech firms to address mis- and disinformation in the wake of the conflict.

  • The European Commission last week said it is investigating X, formerly Twitter, over allegations that the platform spread disinformation about the war between Hamas and Israel.
  • In response to the request, X CEO Linda Yaccarino sent the EU a letter detailing the firm's efforts to tackle war-related disinformation, including "redistributing resources" and "refocused internal teams."
  • The company said it rolled out new enhancements to its "Community Notes" crowd-sourced fact-checking feature, in addition to removing hundreds of Hamas-affiliated accounts.
  • EU commissioner Thierry Breton also sent letters to major tech firms, including Meta, TikTok and YouTube, reminding them of their obligations to police mis- and disinformation under Europe's new Digital Services Act.

Yes, but: In the U.S., it's nearly impossible for tech firms to be held legally liable for content they host on their platforms, thanks to the liability shield of Section 230.

What to watch: Despite the efforts by many tech firms to combat propaganda and wartime misinformation, huge volumes of problematic content continue to appear online, thanks to loosely moderated channels like Telegram.

  • Telegram continues to host Hamas channels, which have ballooned in followers and engagement since the war broke out, according to the Atlantic Council's Digital Forensic Research Lab.
  • Content from Telegram often makes its way onto bigger tech platforms, which are then forced to try to stop it from spreading further.

Go deeper: Big Tech rolls back misinformation measures ahead of 2024
