Report: Facebook sidelined hate speech study
Facebook researchers last year conducted a study of the "worst of the worst" hate speech content on the company's platform and made recommendations for how to curtail it, according to a Washington Post report published on Sunday.
But Facebook officials, afraid "the new system would tilt the scales by protecting some vulnerable groups over others" and concerned about "the potential for backlash from 'conservative partners,'" decided against implementing those recommendations, according to the Post.
Details: The researchers asked thousands of Facebook users in 2019 to rank a series of abusive posts, and respondents, even white conservatives, consistently found those aimed at minority groups to be the most harmful.
- Examples of the "worst of the worst" postings included obscene sexist and racist comments aimed at the progressive Democratic representatives known as "the Squad."
- Yet, the report found, Facebook's automated hate-speech detection algorithm tended to "aggressively [detect] comments denigrating white people more than attacks on every other group," per the Post.
The researchers recommended retuning the system to flag and delete hate speech targeted at Black, Jewish, LGBTQ, Muslim and mixed-race people, since the survey found those posts to be "the worst of the worst."
Yes, but: Facebook leadership rejected the plan and stuck to the company's "politically neutral" stance.
- In a statement to the Post, Facebook spokesman Andy Stone said the company had implemented parts of the report's recommendations, "but after a rigorous internal discussion about these difficult questions, we did not implement all parts as doing so would have actually meant fewer automated removals of hate speech."
The big picture: Facebook's content moderation policies have been in the media spotlight after a trove of documents from whistleblower Frances Haugen chronicled a variety of harms to users.