Illustration: Rebecca Zisser/Axios
Tech companies like Twitter and Facebook have struggled to find ways to label misinformation without appearing biased or baiting users to game the system.
Why it matters: It may seem obvious that tech companies should let users know when something is false, but sometimes, calling out false content can have unintended consequences.
Driving the news: Twitter on Sunday placed a "manipulated media" label on an edited video of 2020 candidate Joe Biden delivering a speech. It appeared to be the first time Twitter used that label to call out a visual that it considers to have been doctored with the intention of manipulating users.
- The video was originally tweeted by White House social media director Dan Scavino and retweeted by President Trump.
- Conservative personalities immediately jumped to Scavino's defense, saying the video wasn't manipulated, only slightly edited.
- Some asserted that Twitter has in the past not labeled videos as manipulated when other campaigns edited them selectively, although the company's policy wouldn't have applied to any video before March 5.
Our thought bubble: Often when a platform labels something as manipulated or false, the label itself is weaponized or slammed as being applied in a biased manner.
- For example, Facebook said in 2017 it would no longer use "Disputed Flags" — red flags next to fake news articles — to identify fake news for users, because academic research showed they didn't work and often had the reverse effect of making people want to click more.