Nov 27, 2018 - Technology

Misinformation bots are smarter than we thought

Illustration: Aïda Amer/Axios

Bots spreading misinformation are using more sophisticated techniques, like going after specific human influencers and amplifying misleading information within the first few seconds after it's posted, according to new studies.

Why it matters: The studies suggest that bots are getting more adept at gaming social platforms, even as the platforms are making changes to weed them out. Bots are also getting better at avoiding detection.

After the 2016 election, researchers set out to understand the role bots played in spreading misinformation. What they found is that bots have long been a step ahead in gaming the web, and social platforms like Twitter and Facebook in particular, using a few key tactics.

1. Focus on speed: The spread of low-credibility content by social bots happens very quickly, according to a new study from Indiana University published in Nature Communications.

  • The study suggests that bots amplify questionable content in the earliest moments of its spread, before it goes viral, such as the first few seconds after an article is first shared on Twitter.
  • "We conjecture that this early intervention exposes many users to low-credibility articles, increasing the chances than an article goes 'viral,'" it said.

2. Using specific targets: Bots increase exposure to negative and inflammatory content on social media in part by targeting specific people, according to a new study published in the Proceedings of the National Academy of Sciences (PNAS).

  • They do this by zeroing in on social influencers, who are more likely to engage with bots. That elevates the content more quickly than if it were exposed only to everyday users or other bots, per the study's authors.
  • This is important because it suggests that bots are more strategic in who they target than previously thought.

3. Elevating human content: Bots aim to exploit human-generated content, because it is more prone to polarization, according to the PNAS study.

  • "They promote human-generated content from (social) hubs, rather than automated tweets, and target significant fractions of human users," the report said.
  • This helps social bots accentuate the exposure of opposing parties to negative content, which can exacerbate social conflict online.

4. Targeting original posts, not replies: Bots seed low-credibility content through original tweets and postings rather than replies, according to the Nature Communications study.

  • "Most articles by low-credibility sources spread through original tweets and retweets, while few are shared in replies," per the study. "This is different from articles by fact-checking sources, which are shared mainly via retweets but also replies."

5. Gaming metadata: Bots are manipulating metadata to mimic human behavior and avoid detection, according to a new study from Data & Society, which receives funding from Microsoft. As platforms get better at detecting inauthentic activity, bots dress their posts up with metadata (photo captions, follower counts, comments and so on) to make them look more human.

  • According to the report, bots must mimic authentic human engagement, not just the way that humans post.

The bigger picture: Most Americans aren't confident they can tell bots from humans on social media, according to a recent Pew Research Center survey.

  • About half of those who have heard about bots (47%) are very or somewhat confident they can recognize these accounts on social media, with just 7% saying they are very confident.
  • By comparison, 84% of Americans expressed confidence in their ability to recognize made-up news in a similar survey two years earlier, just after the 2016 presidential election.

Social platforms have been trying to dial back the content-elevating signals that bots easily game. Twitter, for example, made follower counts less prominent on its iOS app by shrinking their font size in a recent redesign, per The Verge.

What's next? The best way to tackle the problem at scale is by identifying the source of inauthentic behavior, says Joshua Geltzer, executive director of Georgetown University's Institute for Constitutional Advocacy and Protection.

"Although it's improved over the past two years, there needs to be an even better collaboration between the government and the private sector about detection of bad activity in the early stages. While the government doesn't normally share this type of information with the private sector, they should be doing so in order for platforms to act on it and vice versa."
— Joshua Geltzer
