Fake news spreads faster and farther, and we're to blame
False news spreads faster than true stories, and humans, not bots, are to blame, according to a new study published today in Science. Our preference for novel information, which false news is more likely to offer, may be driving the behavior, researchers from MIT report.
The bottom line: "It's important to avoid the temptation to shift the blame elsewhere and focus on these non-human and foreign actors. Even if we solve bots and the foreign interference problem, it wouldn't solve the problem of online misinformation," says Brendan Nyhan, a political scientist at Dartmouth College, who wasn't involved in the study.
Details from the study, one of the first large-scale scientific analyses of false news:
- Researchers looked at how roughly 2,500 contested news stories, rated true or false by six fact-checking sites including PolitiFact and Snopes, spread on Twitter between 2006 and 2017 across 3 million people and more than 4.5 million tweets. They found false stories, especially political ones, traveled faster, farther and deeper into the network than true ones. (True stories took six times as long as false ones to reach 1,500 people.) False stories were also 70% more likely to be retweeted than the truth.
- They examined users' timelines over 60 days and found users were more likely to tweet information they hadn't seen before, and this novel information was more likely to be false than true.
- They then reran the analysis with bots removed, using a bot-detection algorithm. Surprisingly, MIT's Soroush Vosoughi says, bots weren't the reason for the difference: they spread true and false news equally.
- Most news traveling through Twitter isn't contested, so the findings say nothing about news that never reached those fact-checking organizations, according to study author Deb Roy of MIT.
- The researchers suspect the trend carries over to other platforms, but they don't have the data to confirm it. Roy says:
"It raises interesting questions but provides no answers for what happens on other platforms like Facebook, Snapchat and other social media platforms but good old-fashioned things like email can also be used to share news. There are various places where news and information spread that we can’t say anything about."
What's next: Vosoughi says there need to be ways to intervene and dampen the spread of misinformation. Roy is skeptical that media literacy training can address the issue, but Vosoughi is optimistic it could help people pause before they share something.
Another possibility would be an indicator of how much a person contributes to "healthy" discourse on a platform. Twitter recently announced it will work with independent researchers to determine what a healthy social network looks like. (Twitter provided data and funding to the MIT team for this study, but the research was conducted independently.)
In an accompanying article, Nyhan and other researchers argue it should be possible to tweak platforms' business models to better balance quality information against the monetary incentive for attention. Nyhan says Twitter should be commended for making its data available for this research and urges other platforms to follow suit.