Engagement with untrustworthy Facebook content triples since 2016
- Ashley Gold, author of Axios Pro: Tech Policy

Illustration: Sarah Grillo/Axios
Facebook users engage with content from untrustworthy outlets three times as often today as they did at the time of the 2016 U.S. election, researchers at the German Marshall Fund found, despite the many measures the platform and its competitors have rolled out to combat the spread of misinformation.
How it works: The German Marshall Fund, along with nonpartisan news reliability service NewsGuard and social media intelligence firm NewsWhip, measured the spread of articles from deceptive sites across Facebook in the U.S.-focused study.
- The researchers broke sites down into two categories: "false content producers," which publish outright false information, and "manipulators," which publish distorted claims that fall short of being provably false.
- User interactions with both kinds of outlets on Facebook tripled from the third quarter of 2016 to the third quarter of 2020.
- "Manipulators" have grown more popular than "false content producers" on Facebook, seeing interactions increase by 293%.
- Breitbart and The Blaze are two sites the researchers define as "manipulators." The Federalist and DJHJ Media are considered "false content producers."
- Some left-wing sites also fell into these categories but saw less engagement, the researchers note.
What they're saying: "We expected the really egregious ones, that repeatedly publish verifiably false information, that they would've declined, and the other kind, that exploits the loopholes of the platforms and are sort of misleading, would have increased," the German Marshall Fund's Karen Kornbluh told Axios. "What we found is that they both increased."
- "It's a difficult problem to solve, especially when it is domestic information, but what it shows is that the whack-a-mole approach of looking at individual pieces of content is a losing game," she said.
- Kornbluh said platforms should think more about what they allow to be amplified, in addition to what they decide to ban.
Why it matters: Social media companies are stuck combating bad actors online on a case-by-case basis as untrustworthy outlets thrive on growing user engagement. For Facebook, that problem has only gotten harder since 2016.
The other side: Facebook spokesman Andy Stone argued that engagement is not the right metric to capture the experience of most users on the company's platform.
- "Using [engagement] to draw conclusions about the progress we’ve made in limiting misinformation and promoting authoritative sources of information since 2016 is misleading," Stone told Axios. "Over the past four years we've built the largest fact-checking network of any platform, made investments in highlighting original, informative reporting, and changed our products to ensure fewer people see false information and are made aware of it when they do."
Go deeper:
- Exclusive: False fire rumors keep spreading on Facebook despite ban
- Domestic online meddling threatens 2020 election
- Facebook removes inauthentic campaign linked to pro-Trump group
- 2020 election influence operations target journalists
- Facebook, Instagram attach "false information" stamp to Tucker Carlson coronavirus clip