Updated Mar 9, 2020 - Technology

Twitter labels Biden clip retweeted by Trump as "manipulated media"

Photo: Jakub Porzycki/NurPhoto via Getty Images

Twitter has placed a "manipulated media" label on an edited video of 2020 candidate Joe Biden delivering a speech. The video was originally tweeted by White House social media director Dan Scavino and retweeted by President Trump.

Why it matters: This appears to be the first time Twitter has used that label to call out a visual that it considers to have been doctored with the intention of manipulating users.

Details: The tweet was labeled as "manipulated media" based on Twitter's Synthetic and Manipulated Media policy, which states that "you may not deceptively share synthetic or manipulated media that are likely to cause harm."

  • For now, the label only appears when the tweet is viewed in users' Twitter feeds, not when the tweet is opened directly. According to a spokesperson, Twitter is working on a fix.

The tweet itself featured a video of Biden delivering a speech that's clipped to show him saying "We can only re-elect Donald Trump." It doesn't include the rest of the former vice president's sentence from the speech in which he says, "We can only re-elect Donald Trump, if in fact we get engaged in this circular firing squad here."

The big picture: Tech companies like Twitter and Facebook have struggled with ways to fact-check and police misinformation on their platforms without appearing biased.

  • Twitter's new policy went into effect on March 5. Twitter had previously said that a doctored video posted last month by former Democratic presidential candidate Mike Bloomberg would likely have been labeled as false once the new manipulated media policy took effect.
  • A Facebook spokesperson said the doctored Biden video would not violate that social media platform's manipulated media policy.

What they're saying: In a statement responding to Twitter's actions, Biden's campaign manager Greg Schultz slammed Facebook's policies around misinformation, calling them "repugnant."

  • The clip has since been labeled as "Partly False Information" on Facebook.

The other side: Conservative personalities defended Scavino, saying the video wasn't manipulated, only slightly edited.

  • Some asserted that Twitter has in the past declined to label videos as manipulated when other campaigns edited them selectively, although the company's policy did not apply to any video posted before March 5.

Our thought bubble: Often when a platform labels something as manipulated or false, that label is itself weaponized or attacked as having been applied in a biased manner.

  • For example, Facebook said in 2017 it would no longer use "Disputed Flags" (red flags next to fake news articles) to identify fake news for users, because academic research showed they didn't work and often had the opposite effect of making people more likely to click.

Go deeper: Tech platforms struggle to police deepfakes

Editor's note: This article has been updated with comment from Schultz and the video's new status on Facebook.

Go deeper

Why labeling misinformation on social media can be so tricky

Illustration: Rebecca Zisser/Axios

Tech companies like Twitter and Facebook have struggled to find ways to label misinformation without appearing biased or baiting users to game the system.

Why it matters: It may seem obvious that tech companies should let users know when something is false, but sometimes, calling out false content can have unintended consequences.

Twitter cracks down on coronavirus misinformation from Giuliani, Bolsonaro

Photo: Saul Martinez/Getty Images

It's notable that Twitter, like other social networks, has announced stricter rules for virus-related misinformation than for other types of false posts. Even more notable, though, is that Twitter has actually enforced those rules against prominent accounts in recent days.

Why it matters: Twitter has been criticized for being lax in enforcing its rules, particularly against well-known politicians and celebrities.

Facebook to allow users to access data generated from engagement

Photo: Thomas Trutschel/Photothek via Getty Images

Facebook said Monday it's updating its data privacy tools to include additional information about what content users interact with on Facebook and the machine learning data created from their engagement, which the company uses to infer what else they may like.

Why it matters: Facebook wants to get ahead of privacy regulations, with GDPR long in effect and the new California Consumer Privacy Act (CCPA), which took effect Jan. 1, set to be officially enforced starting July 1.