
Facebook grapples with balance of humans and technology for safety

Facebook's Antigone Davis, Guy Rosen and Axios' Kim Hart. Photo: Sara Fischer

Facebook executives dodged saying how many more employees they're willing to hire to review nefarious content on its platform.

Why it matters: It's clear that human content reviewers are needed to address some of the biggest safety and security concerns on the platform, like suicide warnings and egregious content. Facebook has committed to doubling the number of staff monitoring content, but it's not clear whether that is a viable long-term strategy given Facebook's more than 2 billion users.

“I think this is actually where there’s incredible opportunity through technology,” Facebook safety chief Antigone Davis said. “I think the opportunity for technology to fill some of that — you know, how many reviewers — is, I would say, boundless.”

The details: In a conversation with Axios' Kim Hart, Davis and VP of Product Guy Rosen touted that "every piece of content" that triggers the company's suicide prevention efforts is reviewed by humans. But in other instances, they conceded that automating the review is really the only way to scale their safety efforts.

  • Rosen cited suicide prevention as a good example of how humans and machines can work together to mitigate risk. Rosen said they found success when the company started working on proactive detection through using technology to monitor comments.
  • In other instances, they described ways in which machines can be more effective than humans, like using automation to immediately route user complaints to moderators who speak the native language used in the complaints.

Asked whether technology alone can ever get ahead of some of the issues discussed at the event, Rosen said that it is "certainly not a silver bullet," but that they are "building the right set of tools" to conquer the avalanche of unforeseen safety issues "step by step."

The conversation occurred Thursday at a Facebook Security Summit in Washington. The event was designed to offer clarity into the company's efforts around tech addiction, privacy and safety.

  • Facebook has been ramping up conversations around safety and security on its platform in response to instances of suicide and abuse over the past year.
  • It recently conceded that its product may not be good for consumer health and may not be a net good for democracy.
Mike Allen

Why Trump added a streetfighter to his legal team

Screenshot via Fox News

A new addition to President Trump's legal team — Joe diGenova, a former U.S. attorney who is well-known in Washington and has argued for the president on Fox News — reflects three White House realities.

The state of play: (1) The White House is digging in for a fight that looks to be longer and messier than officials had expected. (2) This is another example of the president responding to televised cues. Trump has spent most of his adult life in litigation, and obsesses about legal positioning in the same way that he is consumed by his press coverage. (3) It's another pugilistic voice at the table, and suggests that this weekend's attacks on Mueller won't be the last.


Facebook reaches a tipping point

Illustration: Rebecca Zisser/Axios 

Of all the news crises Facebook has faced during the past year, the Cambridge Analytica scandal is shaping up to be the worst and most damaging.

Why it matters: It's not that the reports reveal anything particularly new about how Facebook's back end works — developers have understood the vulnerabilities of Facebook's interface for years. But stakeholders crucial to the company's success — as well as the public — seem less willing to listen to its side of the story this time around.