Facebook says very few people actually see hate speech on its platform
Facebook said it took action on 22.1 million pieces of hate speech content on its platform globally last quarter, and on about 6.5 million pieces of hate speech content on Instagram. On both platforms, it says about 95% of that hate speech was proactively identified and removed by artificial intelligence.
Details: In total, the company says there are 10–11 views of hate speech for every 10,000 views of content uploaded to the site globally — roughly 0.1%. It calls this metric, which measures how often users actually see violating content relative to all content viewed, "prevalence."
Why it matters: The company is revealing hard numbers about how much hate speech is on its platform for the first time, in an attempt to showcase how much better it has gotten at identifying and removing hate speech quickly.
- Facebook's vice president of global policy Monika Bickert says that the new prevalence metric will be used moving forward to track its effectiveness in removing hate speech.
- She suggested that this metric could be used as a standard for the broader tech industry and could be considered by policymakers as a way to hold tech companies accountable when considering changes to Section 230, a U.S. law that serves as a content liability shield for tech companies.
Details: For context, Facebook says it takes down more hate speech than any other type of problematic content aside from nudity. It takes down far fewer pieces of other types of problematic content, such as harassment, suicide-related content and terrorism.
- Some types of problematic content are much more subjective, like bullying, so the company takes less automated action on that type of content.
- Hate speech tends to be the type of removed content that users appeal most often.
- The company restored more content last quarter than in previous quarters. The company's VP of Integrity Guy Rosen said that was in part because the number of restored pieces of content tends to rise with the number of removed pieces.
The big picture: Facebook has made changes to its hate speech policies in the past few months to curb hate speech ahead of the election.
- After years of criticism, the company expanded its hate speech policies to ban any content that "denies or distorts the Holocaust."
- The company banned all accounts, pages and groups representing the conspiracy theory QAnon from its platforms in October.
- Facebook has faced boycotts and calls for regulation over the way it polices hate speech on its platforms specifically.
Yes, but: The company still faces criticism for what some consider to be policies that don't go far enough in policing hate speech.
- Last week, CEO Mark Zuckerberg said that Facebook wouldn't ban Steve Bannon (who doesn't currently have his own Facebook profile) over comments uploaded to the platform in which he spoke of beheading prominent U.S. officials.
- Bickert clarified Zuckerberg's position, saying that Bannon's comments did violate the company's policies and that videos containing them were all blocked, including on the Steve Bannon-branded page that posted them, but that the upload alone wouldn't get Bannon banned from the platform.