Jan 8, 2020

Democrats unimpressed with Facebook's new deepfake policy

Monika Bickert, head of global policy management at Facebook, testifies during a Senate Commerce Committee hearing in September 2019. Photo: Mark Wilson/Getty Images

Lawmakers questioned Facebook's new deepfake policy at a hearing Wednesday, with Democrats arguing the social media company's plan for addressing manipulated video does not go far enough.

Why it matters: Many policymakers already say tech giants have proven they're not up to the task of regulating themselves. Dissatisfaction with Facebook's plans for handling deepfakes will only further fuel calls for Washington to step in.

Context: Facebook unveiled the deepfake policy this week after criticism over how the company handles altered videos. Top of mind for Democrats was Facebook's decision last year not to remove a doctored video of House Speaker Nancy Pelosi that made her appear drunk.

Driving the news: Rep. Jan Schakowsky, chairperson of the Consumer Protection and Commerce subcommittee, said the new policy appears "wholly inadequate" because it would not have prevented the Pelosi video.

  • Monika Bickert, Facebook's vice president of global policy management, confirmed the video would not have fallen under the new policy, "but it would still be subject to our other policies that address misinformation."

Details: Facebook's new standard is to remove videos that are altered using artificial intelligence and would mislead the average person into thinking the subject of the video said something they did not.

  • Schakowsky noted that this wouldn't cover videos where just the image is altered. "I really don’t understand why Facebook should treat fake audio differently from fake images. Both can be highly misleading and result in significant harm to individuals."
  • Democratic Florida Rep. Darren Soto also pressed Bickert on why Facebook wouldn't take down the Pelosi video. "Our approach is to give people more information so that if something is going to be in the public discourse, they will know how to assess it," Bickert said.
  • She did acknowledge that the company could have gotten the video to fact-checkers faster and put a clearer label on it identifying it as false.

What to watch: Schakowsky seemed at least somewhat receptive to a call from Tristan Harris, a former Google employee who co-founded the Center for Humane Technology, to give federal agencies such as the Departments of Education and Health and Human Services a "digital update" to expand their jurisdictions to tech platforms.

  • Harris suggested agencies like HHS could "audit Facebook on a quarterly basis and, say, tell us how many users are addicted between [certain] ages" and press the company on what it's doing "next quarter to make adjustments to reduce that number."
  • Schakowsky said she hopes to get bipartisan conversations moving on building a new regulatory framework for tech — one, she said, that could include the kinds of audits that Harris floated.
  • "It would not necessarily create new regulatory laws ... but we may need to," she said. "To me, that's the big takeaway today."

Go deeper

Facebook's rising Democrat problem

Illustration: Sarah Grillo/Axios

One of Facebook's biggest headaches leading up to 2020 isn't election interference or fake news — it's worrying about what a Democrat in the White House could mean for the business.

Why it matters: The Obama administration's warm embrace of Big Tech is no longer shared by many Democratic policymakers and presidential hopefuls. Many of them hold Facebook responsible for President Trump's 2016 victory, assail it for allowing misinformation to spread, and have vowed to regulate it or break it up.


Tech platforms struggle to police deepfakes

Illustration: Aïda Amer/Axios

Facebook, TikTok and Reddit all updated their policies on misinformation this week, suggesting that tech platforms are feeling increased pressure to stop manipulation attempts ahead of the 2020 elections.

Why it matters: This is the first time that several social media giants are taking a hard line specifically on banning deepfake content — typically video or audio that's manipulated using artificial intelligence (AI) or machine learning to intentionally deceive users.

Facebook's decade of unstoppable growth

Despite an onslaught of scrutiny and scandal over the past few years, Facebook closed out the second decade of the millennium stronger than ever.

The big picture: The tech giant brought in nearly $70 billion in revenue for 2019, up more than 25% from the year prior and up more than 1300% from 2012, the year it went public.
