Aug 6, 2019

Misinformation haunts 2020 primaries

Illustration: Sarah Grillo/Axios

Despite broad efforts to crack down on misinformation ahead of the 2020 election, the primary season so far has been chock-full of deceptive messages and misleading information.

Why it matters: More sophisticated tactics that have emerged since 2016 threaten to derail the democratic process by further polluting online debate. And the seemingly unending influx of fakery could plant enough suspicion and cynicism to throw an otherwise legitimate election into question.

The big picture: Social media platforms, which host the greatest volume of misinformation, have gotten wise to basic techniques used in previous elections, and now regularly take down swaths of accounts they say are fake or meddlesome.

  • In response, trolls both foreign and domestic have developed new attacks.
  • But plenty of simple bots that appear to be foreign controlled still slip through the companies' automated sieves, experts say, further endangering the already-precarious coming elections.

And the playing field has grown. "Far more people have gotten the idea that you can throw a U.S. election by trolling," says Ben Nimmo, a misinformation expert at the Atlantic Council.

Driving the news: Kamala Harris and Joe Biden were the most frequent targets of misinformation during and immediately after the most recent Democratic debates, according to a new report from VineSight, a company tracking Twitter activity.

  • Last week, the Wall Street Journal reported that bot-like activity pushed racially divisive content, especially about Harris, during the Democratic debates, citing data from social intelligence company Storyful.

Some of the most important shifts and tactics:

1. Smarter bots: Bad actors are relying less on phalanxes of bots known as botnets, instead creating convincing fakes to manipulate humans into doing the dirty work for them.

  • Bots today are more likely to mimic humans by hacking real accounts, aping human behavior or targeting people with lots of followers who can easily disseminate false or misleading information.
  • A lot of this happens on Twitter, because that's where journalists, experts and politicos hang out. "The impact of coordinated campaigns and bots on Twitter is first and foremost to set the news agenda," says Matthew Hindman, a professor at George Washington University. "Setting the agenda is hugely powerful."

2. Audience building: Rather than churn out short-lived fake accounts that spread misinformation but are quickly shut down, sophisticated players build pages and accounts that post engaging non-political content just to build a following.

  • A large group of trusting followers is more likely to spread a well-timed meme or political message snuck in between anodyne posts.

3. Shift from foreign to domestic: Influence from overseas, particularly Russia, has remained a central concern for government watchdogs — but misinformation is coming from other countries and inside the U.S., too.

  • False rumors about Mayor Pete Buttigieg committing sexual assault, for example, were created by two American white nationalists. And last week, Yahoo News obtained an FBI document warning that conspiracy theories are a new domestic terrorism threat.
  • Homegrown players range from troublemakers on internet message boards to high-profile consultants.

4. Shift in focus to obscure platforms: Facebook and Twitter have sucked up most of the attention since 2016, but fringe sites like 4chan and 8chan, plus niche blogs and pages, are breeding grounds for misinformation, and largely outside the public eye.

5. Targeting individual influencers: The rumormongers' holy grail is to get a mainstream journalist or celebrity to amplify misinformation. Tailored messages over Twitter DMs or emails can help win their trust, Nimmo says.

6. Distorting candidates' backgrounds: Newer candidates, still relatively unknown to the public, are having their pasts picked apart and misrepresented — a new spin on the racist "birther" attacks on President Obama's background.

  • Harris has been a particular target for misinformation.
  • Rep. Beto O'Rourke weathered false claims that he left racist language on an answering machine in the 1990s, per Politico.

7. Shift in focus to mainstream media: Even traditional outlets, with their large followings, have been caught spreading misinformation. Fox News hosts have recently been accused of peddling conspiracy theories about Joe Biden's health. And most major outlets, including The New York Times, have cited Russian troll accounts in news and opinion pieces, according to a study from UW Madison.

8. Deepfakes: The potential for manipulated videos to create chaos for voters became clear after an edited clip of Nancy Pelosi went viral earlier this year. That wasn't a deepfake — those sophisticated AI-manipulated videos haven't shown up in the U.S. political sphere yet, but experts worry they will soon. Most campaigns, however, are largely unprepared for the threat.

But, but, but: Despite the increasingly sophisticated tactics, some of the kludgy methods used in past election campaigns persist undetected.

  • "There's still an enormous amount of very crude obviously fake accounts on pretty much every platform," says Hindman.

Go deeper

Autocracies rely on social media as a potent propaganda weapon

Illustration: Aïda Amer/Axios

Twitter and Facebook announced Monday the takedown of coordinated misinformation campaigns from the Chinese government, the latest instance of a regime caught using social media to exploit its own people, spread propaganda or retain power.

Why it matters: While leaders around the globe, mostly in the West, push to hold social media companies accountable for large-scale misinformation campaigns, autocratic regimes have become increasingly reliant on social media technologies.

Go deeper: Aug 20, 2019

Instagram adds tools for users to flag misinformation

Instagram logo. Photo: Alvin Chan/SOPA Images/LightRocket via Getty Images

Instagram is adding new tools that let users report posts they believe are false, according to a company spokesperson.

Why it matters: These updates are part of a bigger investment by Instagram to reduce the spread of misinformation on the platform, which is reportedly a hotbed for conspiracy theories and fake news, ahead of upcoming elections.

Go deeper: Aug 15, 2019

Twitter bans advertising from "state-controlled news media entities"

Tens of thousands take to the streets of Hong Kong in a rally in Victoria Park, Aug. 18. Photo: Vernon Yuen/NurPhoto via Getty Images

Twitter announced Monday that it would no longer accept advertising from "state-controlled news media entities" after finding that more than 900 accounts originating from inside China have been part of a coordinated effort to undermine political protests in Hong Kong.

The big picture: Hong Kong saw its 11th straight week of pro-democracy protests over the weekend as the city pushes back on what it views as encroachment by the Chinese government on its autonomy. The accounts, which Twitter said were part of a "coordinated state-backed operation," sought to delegitimize the protest movement.

Go deeper: Updated Aug 19, 2019