Oct 17, 2019 - Technology

Human actors are changing the spread of disinformation

Illustration: Eniola Odetunde/Axios

Disinformation campaigns used to consist of trolls and bots orchestrated and manipulated to produce a desired result. Increasingly, though, these campaigns are able to find willing human participants to amplify their messages and even generate new ones on their own.

The big picture: It's as if they're switching from employees to volunteers — and from participants who are in on the game to those who actually believe the disinformational payload they're delivering.

Why it matters: Understanding this changing nature is critical to preparing for the next generation of information threats, including those facing the 2020 presidential campaign.

Speaking at Stanford University Tuesday, researcher Kate Starbird — a University of Washington professor who runs a lab that studies mass participation — traced the change across the stories of three different campaigns.

1. Russian interference in the 2016 election: Starbird's work started not with studying disinformation, but with an analysis of the debate that raged on Twitter over the Black Lives Matter movement.

  • It was only after Twitter released data on Russian propagandists in November 2017 that her team realized that some of the most prolific posters — on both sides of the debate — were fictional personas created by the Russians.
  • "In a few cases, we can see them arguing with themselves," said Starbird.

2. Syria's "White Helmets": In this case, the White Helmets, an aid group working in Syria, came under attack from online critics over a host of alleged atrocities.

  • Here Russia was actively involved in stirring the pot, but the posters themselves were neither bots nor trolls; they were activists who adopted the issue as their own.
  • "These are real people who are sincere believers of the content they are sharing," Starbird said.
  • Russian media, including Sputnik and RT, made the movement appear significantly larger, though, by interviewing activists and giving them both a platform and a veneer of legitimacy.

3. Conspiracy theories tied to mass-casualty events: People are predisposed to find conspiracies in every tragedy, and conspiracy theories have accompanied all manner of mass-casualty events, from the Boston Marathon bombing to the Sandy Hook shooting.

  • The theories crop up organically, though Russian or other disinformation promoters can and do help amplify the messages.
  • Terms like "false flag" and "crisis actors" are applied to the victims, flipping the script of whatever has transpired.
  • "It's almost like a self-sustaining community, but you can see it's been shaped by disinformation campaigns of the past," Starbird said.
  • All these factors, she said, make these cases the "most frightening" she's studied.

Between the lines: Not all of the disinformation has come from Russia, Starbird said, but she added: "They have been innovators in this space."

What's next: Starbird recommended a couple of actions for the tech companies.

  • First, she urged them to look at entire campaigns, rather than focusing on the veracity of individual posts. While Twitter and Facebook tend to look at posts in isolation, the creators of disinformation are focused on an overall campaign, a set of narratives with a larger point, she said.
  • Starbird also said tech companies should discount false claims of conservative bias that, she suggested, are being leveled by the disinformation's beneficiaries.

"The people that have benefited are now in power in a lot of places," she said. "Anything the companies do to take a chunk [of their power away] is going to be called bias."

Meanwhile: Many of the next disinformation threats may be domestic, notes former Facebook security chief Alex Stamos, who now teaches at Stanford. And those will be harder for law enforcement to investigate, given that in many cases no law is being broken.

Go deeper: Read more from Axios' Misinformation Age series
