Aug 11, 2020 - Technology

Facebook steps up hate speech crackdown, removing 22.5 million posts in Q2

Mark Zuckerberg. Photo: Tobias Hase/picture alliance via Getty Images

Facebook took down 22.5 million posts for hate speech during the second quarter of this year, roughly five times the number it removed in the same quarter last year and more than twice the number removed in the first quarter of 2020.

Why it matters: The company is facing enormous pressure from the advertising and civil rights communities to address hate speech on its platforms. Last month, civil rights groups launched the "Stop Hate for Profit" ad boycott of Facebook, which more than 1,000 advertisers joined.

Driving the news: The company said Tuesday it's getting better at policing hate speech, noting in its sixth "Community Standards Enforcement Report" that 95% of the hate speech it took action on was caught by its automated systems before any user reported it, up from 89% during the first quarter of 2020.

  • Facebook attributed the increase in hate speech enforcement to the expansion of its automated detection technology to Spanish, Arabic and Indonesian, as well as improvements to its English-language detection technology in Q1.
  • The company said improvements to its technology also helped it take action on more terrorism-related content.

Facebook also announced updated policies around certain kinds of implicit hate speech. It said that content depicting blackface or stereotypes about Jewish people controlling the world will now be considered violations of its hate speech policies.

Between the lines: The company said it also removed over 7 million instances of harmful COVID-19 misinformation from Facebook and Instagram, and slapped warning labels on about 98 million pieces of COVID-19 misinformation on Facebook.

The big picture: Facebook said that the absence of content reviewers in Q1 due to the pandemic was a reminder that filtering for bad content can't be handled by automation alone: humans and automated systems must work in tandem.

  • The company acknowledged that it was able to better police misinformation in the second quarter partly because content moderators were permitted to work remotely. It had initially sent its content reviewers home in March due to the pandemic.
  • With fewer content reviewers available, the company said it took action on less content related to suicide and self-injury on both Facebook and Instagram, and to child nudity and sexual exploitation on Instagram. It also said it was able to process fewer content appeals.

What's next: Facebook VP of Integrity Guy Rosen said the company plans to commission an independent third-party audit to validate its internal numbers on hate speech and content moderation. He said Facebook is currently searching for an auditor and aims to conduct the audit in 2021.

  • Facebook VP of Content Policy Monika Bickert said the company is also creating a "Diversity Advisory Council" to help inform its content enforcement standards.