Tech platforms struggle to police deepfakes
Facebook, TikTok and Reddit all updated their policies on misinformation this week, suggesting that tech platforms are feeling increased pressure to stop manipulation attempts ahead of the 2020 elections.
Why it matters: This is the first time that several social media giants are taking a hard line specifically on banning deepfake content — typically video or audio that's manipulated using artificial intelligence (AI) or machine learning to intentionally deceive users.
Driving the news:
- Deepfakes are videos (or audio) edited using AI and machine learning.
- Facebook said Monday that it would ban deepfake videos that have been edited beyond adjustments for clarity and quality, or that have been manipulated to misrepresent what someone actually said.
- TikTok said Wednesday that it will ban misinformation created to cause harm to users or the larger public, including misinformation about elections or other civic processes, as well as manipulated content meant to cause harm. TikTok's policies do not explicitly address or define deepfakes, but they cover manipulated content in much more depth than its previous standards did.
- Reddit said Thursday that it would ban accounts that impersonate individuals or entities in a misleading or deceptive manner. It will also ban deepfakes or other manipulated content that's "presented to mislead, or falsely attributed to an individual or entity."
The big picture: Concern around deepfakes began to surface after the 2016 election and has since become a popular talking point in accounts of our tech-fueled slide to dystopia.
- Yes, but: To date, there have been few instances of true deepfakes going viral to mislead users. Rather, most misleading media that goes viral online takes the form of amateurishly doctored images that deceive not through sophisticated technology but through false context.
- Case in point: Hazel Baker, Reuters' head of user-generated content news-gathering, told Axios last month that "Ninety percent of manipulated media we see online is real video taken out of context used to feed a different narrative."
Between the lines: The best example of confusion around whether a post was a deepfake and should be removed occurred last year, when a doctored video of Nancy Pelosi that was slowed to make her appear drunk went viral online.
- Be smart: Facebook's new deepfake policies wouldn't necessarily ban that video, because it wasn't created using AI or machine learning. Reddit's new policies would, provided the clip was posted with the intent to mislead users about the truth.
Our thought bubble: One of the biggest steps social media companies have made in taking action on deepfakes is objectively defining what they are. Deciding when to remove them remains difficult.
- For now, the companies are trying to use intent as their barometer. But intent is highly subjective, and making those calls at tech-platform scale is going to prove a challenge.