Photo: Jaap Arriens/NurPhoto via Getty Images

Facebook said it took action on 22.1 million pieces of hate speech content on its platform globally last quarter, and about 6.5 million pieces on Instagram. On both platforms, it says about 95% of that hate speech was proactively identified and stopped by artificial intelligence.

Details: In total, the company says there are 10 to 11 views of hate speech for every 10,000 views of content on the site globally, or roughly 0.1%. It calls this metric, which compares how much problematic content it doesn't catch with how much is reported and removed, "prevalence."
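For illustration, here is a minimal sketch of how a view-based prevalence figure could be computed; the function name and sample numbers below are hypothetical, not Facebook's actual methodology or data.

    # Hypothetical sketch of a view-based "prevalence" metric.
    # Names and sample figures are illustrative, not Facebook's methodology.
    def prevalence_per_10k(violating_views: int, total_views: int) -> float:
        """Views of violating content per 10,000 total content views."""
        return violating_views / total_views * 10_000

    # Example: 1,050 hate speech views out of 1,000,000 total views
    # works out to 10.5 per 10,000, or about 0.1% of all views.
    rate = prevalence_per_10k(violating_views=1_050, total_views=1_000_000)
    print(f"{rate:.1f} per 10,000 views ({rate / 10_000:.3%})")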

Why it matters: The company is revealing hard numbers about how much hate speech is on its platform for the first time, in an attempt to showcase how much better it has gotten at identifying and removing hate speech quickly.

  • Facebook's vice president of global policy, Monika Bickert, says the new prevalence metric will be used going forward to track the company's effectiveness at removing hate speech.
  • She suggested the metric could serve as a standard for the broader tech industry, and that policymakers weighing changes to Section 230, the U.S. law that shields tech companies from liability for user content, could use it to hold those companies accountable.

For context: Facebook says it takes down more hate speech than any other type of problematic content aside from nudity. It removes far fewer pieces of content in categories like harassment, suicide and terrorism.

  • Some types of problematic content, like bullying, are much more subjective, so the company takes less automated action on them.
  • Hate speech tends to be the content users appeal most often once it has been removed.
  • The company restored more content last quarter than in previous quarters. Its VP of integrity, Guy Rosen, said that was in part because the number of restored pieces tends to rise with the number of removed pieces.

The big picture: Facebook has tightened its hate speech policies in the past few months to curb such content ahead of the election.

Yes, but: The company still faces criticism that its policies don't go far enough in policing hate speech.

  • Last week, CEO Mark Zuckerberg said Facebook wouldn't ban Steve Bannon, who doesn't currently have his own Facebook profile, over video comments uploaded to the platform in which he threatened to behead prominent U.S. officials.
  • Bickert clarified Zuckerberg's position, saying the comments did violate the company's policies and that the videos containing them were all blocked, including on the Steve Bannon-branded page that posted them, but that the upload alone wouldn't get Bannon banned from the platform.

Go deeper


Facebook removed 265,000 pieces of content on voter interference

Photo Illustration by Budrul Chukrut/SOPA Images/LightRocket via Getty Images

Facebook says it removed more than 265,000 pieces of content from Facebook and Instagram in the U.S. for violating its content policies on voter interference leading up to the election.

Why it matters: The company was much more proactive this election cycle than last in taking down and labeling content attempting to disrupt the election.

America's Chinese communities struggle with online disinformation

Illustration: Annelise Capossela/Axios

Disinformation has proliferated on Chinese-language websites and platforms like WeChat that are popular with Chinese speakers in the U.S., just as it has on English-language websites.

Why it matters: There are fewer fact-checking sites and other sources of reliable information in Chinese, making it even harder to push back against disinformation.