
Facebook's "War Room." Photo: Noah Berger/AFP via Getty Images
Facebook spent 3 hours on Wednesday detailing its efforts to fight misinformation, highlighting areas of progress but leaving unanswered the overarching question of whether users are safer than they were 2 years ago.
The good: Facebook is getting better at both detecting and removing some types of content, with a particular focus on efforts to subvert democratic elections.
- The company said it removed 45,000 attempts at voter suppression during the 2018 U.S. midterms, including lies about when, where and how to vote, as well as misinformation about immigration enforcement at the polls. It said 90% of those attempts were spotted before anyone reported them.
The bad: Other types of negative content remain prevalent on Facebook.
- The company proactively detects and removes more than 99% of child exploitation and terrorist propaganda before users report it, as well as 96% of nudity and 97% of graphic violence. For hate speech, though, the figure is barely more than half: 52%.
The ugly: Facebook's pledge to shift toward private, encrypted conversations is likely to make it harder for the company to monitor and remove objectionable content. Facebook executives acknowledged the issue Wednesday, but declined to offer any specifics on how the company will deal with it.
Between the lines: In most cases, Facebook isn't looking to remove false information outright; instead, it's working to limit how widely such posts are seen and shared.
- "We don’t want to make money from problematic content or recommend it to people," Facebook product manager Tessa Lyons told reporters.
Facebook faces a tough challenge as it looks to reduce the visibility of content that approaches, but doesn't violate, its standards.
- "As [a post] gets closer to the line, it gets more and more engagement," said Henry Silverman, operations specialist.
- That's why Facebook is looking to create a new gray zone of content that's promoted less but not removed.
- But as the company turns up that dial, it cuts into engagement, the metric its whole business is tuned to maximize (see the sketch below).
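To make that tradeoff concrete, here is a minimal, hypothetical sketch of a demotion "dial." Facebook hasn't published its ranking code; the function names, the policy-proximity score and every number below are illustrative assumptions, not the company's actual system.

```python
# Illustrative sketch only, not Facebook's actual ranking system.
# Assumes a hypothetical classifier score "policy_proximity" in [0, 1]
# (1.0 = violating content, which would be removed rather than demoted)
# and a tunable "dial" controlling how hard borderline content is suppressed.

def demotion_factor(policy_proximity: float, dial: float) -> float:
    """Down-weight content as it approaches the policy line."""
    return 1.0 - dial * policy_proximity

def ranking_score(predicted_engagement: float,
                  policy_proximity: float,
                  dial: float = 0.5) -> float:
    # Borderline posts tend to draw *higher* predicted engagement,
    # so turning up the dial directly trades away engagement.
    return predicted_engagement * demotion_factor(policy_proximity, dial)

# A borderline post out-ranks a benign one until the dial is turned up:
print(ranking_score(100.0, 0.9, dial=0.0))  # 100.0 (no demotion)
print(ranking_score(100.0, 0.9, dial=0.8))  # 28.0 (heavily demoted)
print(ranking_score(60.0, 0.1, dial=0.8))   # 55.2 (benign post now ranks higher)
```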
What they're saying: Asked whether Facebook believes users are safer than in years past, VP of integrity Guy Rosen told Axios that Facebook is doing better but stopped short of claiming users are safer.
"We're taking down more bad content, and taking down more of it before people even report it. We're proactively and methodically addressing abuse on the platform, understanding existing problems and identifying new ones as they emerge. So, I would say we're doing better than we were. But ... these are not problems you fix, but issues where you continually improve."— Guy Rosen
Meanwhile, Rosen said at the event that Facebook is still several months from delivering the Clear History privacy tool it originally promised for last year. The company now hopes to ship it in the fall.
Flashback: It was one year ago today that CEO Mark Zuckerberg was defending Facebook on Capitol Hill.
Our thought bubble: Wednesday's event sounded like the online-platform equivalent of a military briefing. Facebook is in fact now engaged in a long-term war of attrition with some of its own users to shape the boundaries of acceptable speech on its platform. It has one big advantage: It owns the battleground and sets the rules of engagement.