Illustration: Aïda Amer/Axios

Bots spreading misinformation are using more sophisticated techniques, like going after specific human influencers and targeting misleading information within the first few seconds of it being posted, according to new studies.

Why it matters: The studies suggest that bots are getting more adept at gaming social platforms, even as the platforms are making changes to weed them out. Bots are also getting better at avoiding detection.

After the 2016 election, researchers set out to understand the role bots played in spreading misinformation. They found that bots have been several steps ahead in gaming the web — in particular, social platforms like Twitter and Facebook — using a few key tactics.

1. Focus on speed: The spread of low-credibility content by social bots happens very quickly, according to a new study from Indiana University published in Nature Communications.

  • The study suggests that bots amplify questionable content in the early spreading moments before it goes viral, like the first few seconds after an article is first published on Twitter.
  • "We conjecture that this early intervention exposes many users to low-credibility articles, increasing the chances that an article goes 'viral,'" it said.
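The early-intervention pattern the study describes can be illustrated with a toy heuristic — the function names, window, and threshold below are hypothetical, not taken from the study: flag an article whose shares are concentrated in the first seconds after publication rather than spread over its lifetime.

```python
# Toy illustration (not the study's method): flag articles whose share
# activity is concentrated in the first seconds after publication — the
# window in which the study says bots amplify questionable content.

def early_amplification_ratio(share_times, window=10.0):
    """share_times: seconds elapsed between publication and each share.
    Returns the fraction of all shares that landed inside the first
    `window` seconds."""
    if not share_times:
        return 0.0
    early = sum(1 for t in share_times if t <= window)
    return early / len(share_times)

def looks_bot_amplified(share_times, window=10.0, threshold=0.5):
    """Heuristic: suspicious if more than `threshold` of all shares
    arrive within the first `window` seconds."""
    return early_amplification_ratio(share_times, window) > threshold

# A burst of shares 1-5 seconds after posting, then a long quiet tail:
burst = [1, 2, 2, 3, 4, 5, 600, 1200]
print(looks_bot_amplified(burst))   # most shares landed early
```

A real detector would compare against baseline diffusion curves rather than a fixed threshold; this sketch only shows the shape of the signal.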

2. Using specific targets: Bots increase exposure to negative and inflammatory content on social media in part by targeting specific people, according to a new study from the Proceedings of the National Academy of Sciences (PNAS).

  • They do this in part by targeting specific social influencers, who are more likely to engage with bots. This elevates the content more quickly than exposing it to everyday users or other bots would, per the study's authors.
  • This is important because it suggests that bots are more strategic in who they target than previously thought.
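The hub-targeting strategy can be sketched in a few lines — the follower graph and account names below are hypothetical, not data from the PNAS study: in a follower network, the most-followed accounts are the "hubs" whose single engagement reaches far more users than many engagements from ordinary accounts.

```python
# Minimal sketch (hypothetical data): rank accounts in a follower graph
# by in-degree to find the "hubs" a strategic bot would target first.

from collections import Counter

def top_hubs(follower_edges, k=2):
    """follower_edges: (follower, followee) pairs.
    Returns the k most-followed accounts — the highest-leverage
    targets for amplification."""
    in_degree = Counter(followee for _, followee in follower_edges)
    return [account for account, _ in in_degree.most_common(k)]

edges = [
    ("u1", "influencer"), ("u2", "influencer"), ("u3", "influencer"),
    ("u1", "blogger"), ("u2", "blogger"),
    ("u3", "lurker"),
]
print(top_hubs(edges))  # ['influencer', 'blogger']
```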

3. Elevating human content: Bots aim to exploit human-generated content, because it is more prone to polarization, according to the PNAS study.

  • "They promote human-generated content from (social) hubs, rather than automated tweets, and target significant fractions of human users," the report said.
  • This helps social bots accentuate the exposure of opposing parties to negative content, which can exacerbate social conflict online.

4. Targeting original posts, not replies: Bots spread low-credibility content that is created through an initial tweet or posting, according to the Nature study.

  • "Most articles by low-credibility sources spread through original tweets and retweets, while few are shared in replies," per the study. "This is different from articles by fact-checking sources, which are shared mainly via retweets but also replies."

5. Gaming metadata: Bots are using more metadata to mimic human behavior and thus avoid detection, according to a new study from Data & Society, which receives funding from Microsoft. As platforms get better at detecting inauthentic activity, bots are using metadata — photo captions, followers, comments, etc. — to make their posts seem more human-like.

  • According to the report, bots must mimic authentic human engagement, not just the way that humans post.
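The metadata arms race can be made concrete with a crude scoring sketch — every feature and weight here is illustrative, not taken from the Data & Society report: platforms combine many weak metadata signals like these, which is exactly why bots now curate captions, followers, and comments to look human.

```python
# Illustrative only: a crude metadata-based bot score of the kind the
# report says bots now try to evade. Features and weights are
# hypothetical, not from the report.

def metadata_bot_score(profile):
    """profile: dict of a few metadata fields. Higher score = more
    bot-like. Each signal is weak on its own; real systems combine
    many such features."""
    score = 0.0
    followers = profile.get("followers", 0)
    following = profile.get("following", 0)
    # Following far more accounts than follow back is a classic signal.
    if following > 10 * max(followers, 1):
        score += 1.0
    # Empty bio / no human-curated profile text.
    if not profile.get("bio"):
        score += 0.5
    # Posting at a superhuman rate.
    if profile.get("posts_per_day", 0) > 100:
        score += 1.0
    return score

suspicious = {"followers": 3, "following": 5000, "bio": "", "posts_per_day": 400}
normal = {"followers": 250, "following": 300, "bio": "Cyclist, parent", "posts_per_day": 4}
print(metadata_bot_score(suspicious) > metadata_bot_score(normal))  # True
```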

The bigger picture: Most Americans say they can't distinguish bots from humans on social media, according to a recent Pew Research Center survey.

  • About half of those who have heard about bots (47%) are very or somewhat confident they can recognize these accounts on social media, with just 7% saying they are very confident.
  • By comparison, 84% of Americans expressed confidence in their ability to recognize made-up news in a study conducted two years earlier, just after the presidential election.

Social platforms have been trying to reduce the content-elevating signals that are easily gamed by bots. Twitter, for example, has made follower counts less prominent on its iOS app by shrinking their font size in a recent redesign, per The Verge.

What's next: The best way to tackle the problem at scale is by identifying the source of inauthentic behavior, says Joshua Geltzer, executive director of Georgetown University's Institute for Constitutional Advocacy and Protection.

"Although it's improved over the past two years, there needs to be an even better collaboration between the government and the private sector about detection of bad activity in the early stages. While the government doesn't normally share this type of information with the private sector, they should be doing so in order for platforms to act on it and vice versa."
— Joshua Geltzer
