As misinformation rampages across the internet, the clearest way to make sense of its scale and impact may be to borrow the lens used for detecting targeted influence campaigns: evaluating who's most active in spreading it and why, experts tell Axios' Kyle Daly.
Why it matters: Widespread misinformation is endangering public health and faith in democracy. Any hope of containing it relies on greater visibility into exactly how misinformation spreads across the internet.
The big picture: It's exceedingly tough to quantify misinformation. But we're not entirely in the dark and have some simple but useful tools — among them:
- Google Trends measures the total volume of searches for a given term and can capture the tipping point when false narratives break out into the mainstream.
- NewsWhip gauges the attention particular topics are receiving by measuring the social media interactions — likes and shares on Facebook and Twitter, for example — that news stories and other links about them garner. (It's how we built our chart above.)
Yes, but: Such methods provide just a small part of the picture — and nothing about whether the people clicking on misinformation are actually buying it. Experts Kyle talked with point to several big problems with existing methods of quantifying misinformation.
1. The numbers that are available are incomplete and potentially misleading.
- Twitter and Facebook have offered snapshots of how much material they've taken down around certain topics, but not the total volume of material they're reviewing.
- Facebook has spoken about measuring the overall "prevalence" of content that violates its rules by using sampling techniques. But observers aren't sold on relying on the platforms' own assessments.
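Facebook hasn't published the details of its prevalence methodology, but the general idea of measuring prevalence by sampling can be sketched simply: label a random sample of content as violating or not, then report the sample fraction with a confidence interval. The numbers and the use of a Wilson interval below are illustrative assumptions, not Facebook's actual method.

```python
import math

def estimate_prevalence(labels, z=1.96):
    """Point estimate and ~95% Wilson confidence interval for the
    fraction of violating items, given 0/1 labels from a random sample."""
    n = len(labels)
    p = sum(labels) / n  # sample fraction of violating content
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return p, (center - margin, center + margin)

# Hypothetical example: reviewers label 10,000 randomly sampled posts
# and find 120 that violate the platform's rules.
labels = [1] * 120 + [0] * 9880
p, (lo, hi) = estimate_prevalence(labels)
print(f"prevalence ~ {p:.2%}, 95% CI ({lo:.2%}, {hi:.2%})")
```

The catch the skeptics raise is visible even in this toy version: the estimate is only as trustworthy as the sampling frame and the labeling, both of which the platform controls.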
2. The public internet is only one stream in the broader misinformation deluge.
- False claims and conspiracy theories are increasingly being spread in private Facebook groups, private chat servers on platforms like Discord, and private texts and messaging groups.
- They also surface in partisan media outlets, elected officials' public statements and everyday real-world conversation.
3. "Misinformation" is a subjective category.
- Something like "5G towers spread COVID-19" is an easily adjudicated false claim.
- But most misinformation appears in shades of gray, coming as a misleading gloss on events or statistics with some basis in reality.
- And the language of misinformation is often innuendo and obfuscation — vague allusions to conspiracy and malfeasance rather than bald-faced lies.
What's next: Companies and research groups that track misinformation are increasingly focused on the actors who are most effective in driving discussion around certain topics — and on those actors' agendas. A better understanding of who's giving voice to a particular claim can serve as a shortcut for individuals to judge its merits without relying on platform enforcement or transparency.
- Yonder and Graphika are among the companies making sense of misinformation's spread not by trying to run down every questionable claim, but by analyzing and defining the groups and figures that are either most active in discussing topics like mail-in ballots or responsible for shuttling such discussions from platform to platform.
Of note: These approaches are effective and appealing because they don't require a full picture of every single questionable claim or conspiracy theory that travels across the internet. But researchers and policymakers still contend that we'd benefit from more transparency and accountability from platforms on steps they're taking to fight misinformation.
Something as simple as a government-mandated algorithmic impact assessment could force platforms toward a better understanding of the effect the decisions they farm out to AI have on civic health, argues Nate Erskine-Smith, a Canadian parliamentarian and member of the International Grand Committee on Disinformation.