Jan 7, 2020

Facebook changing deepfake policies

Illustration: Sarah Grillo/Axios

Facebook is tightening its policies on "manipulated media," including deepfakes, Monika Bickert, the company's vice president of global policy management, says in a blog post.

Why it matters: Facebook has been criticized for the way it enforces its policies on deepfakes (AI-generated audio and video) and other misleading media. In particular, critics took aim at the tech giant's decision to allow a doctored video of House Speaker Nancy Pelosi to remain on its platform last year.

What's new: The new standards call for manipulated videos to be removed from the platform if they meet the following criteria:

  1. The video has been edited or synthesized, beyond adjustments for clarity and quality, in ways that aren't apparent to an average person and would likely mislead someone into thinking that a subject of the video said words they did not actually say.
  2. The video is the product of artificial intelligence or machine learning, not just Photoshop or another standard video-editing program, in a way that makes it appear to be authentic.

Between the lines: The policy updates don't extend to content that is parody or satire, Facebook says. Nor do they extend to video that's been edited "solely to omit or change the order of words."

Yes, but: Even if a video doesn't meet these new criteria for removal, it can still be taken down if it violates any of Facebook's other Community Standards, which cover issues like graphic violence, nudity or hate speech.

  • Similarly, any type of media that is identified as being a part of a coordinated inauthentic campaign will be taken down — even if the video doesn't violate the deepfake policy.

What's next: Videos that don’t meet the new standards for removal are still eligible for review by Facebook's third-party fact-checkers, the company says.

The bottom line: Facebook says that it doesn't want to remove all manipulated videos flagged by fact-checkers as false because those videos will be available elsewhere on the internet regardless. Rather, it thinks the better policy is to leave them up and label those videos as false — giving users context that they may not get elsewhere.


Democrats unimpressed with Facebook's new deepfake policy

Monika Bickert, head of global policy management at Facebook, testifies during a Senate Commerce Committee hearing in September 2019. Photo: Mark Wilson/Getty Images

Lawmakers questioned Facebook's new deepfake policy at a hearing Wednesday, with Democrats arguing the social media company's plan for addressing manipulated video does not go far enough.

Why it matters: Many policymakers already say tech giants have proven they're not up to the task of regulating themselves. Dissatisfaction with Facebook's plans for handling deepfakes will only further fuel calls for Washington to step in.

Jan 8, 2020

Tech platforms struggle to police deepfakes

Illustration: Aïda Amer/Axios

Facebook, TikTok and Reddit all updated their policies on misinformation this week, suggesting that tech platforms are feeling increased pressure to stop manipulation attempts ahead of the 2020 elections.

Why it matters: This is the first time that several social media giants have taken a hard line specifically against deepfake content — typically video or audio that's manipulated using artificial intelligence (AI) or machine learning to intentionally deceive users.

Twitter sets high bar for taking down deepfakes

Photo illustration: Omar Marques/SOPA Images/LightRocket via Getty Images

Twitter on Tuesday announced a new policy aimed at discouraging the spread of deepfakes and other manipulated media, but the service will only ban content that threatens people's safety, rights or privacy.

Why it matters: Tech platforms are under pressure to stanch the flow of political misinformation, including faked videos and imagery. Twitter's approach, which covers a wide range of material but sets narrow criteria for deletion, is unlikely to satisfy critics or politicians like Joe Biden and Nancy Pelosi, who have both slammed platforms for allowing manipulated videos of them to spread.