Feb 22, 2019 - Health

Anti-vaccination content haunts Big Tech

Illustration: Rebecca Zisser/Axios

Anti-vaccination content that has long appeared in search results and on social media is getting renewed attention after the U.S. government attributed recent measles outbreaks in part to reduced vaccination levels in some areas.

Why it matters: The renewed spotlight on the issue has prompted members of Congress to demand answers from platforms about how they intend to handle conspiracy theories that could impact public safety.

Driving the news: A slew of media reports about anti-vaccination videos thriving on social media services has sent companies scrambling to address them. Most recently, BuzzFeed published a report explaining how the problem is prevalent on YouTube. Health experts are calling on Facebook to manage anti-vaccination groups.

The problem has gotten even more attention as members of Congress begin to publicly address the issue, which has troubled platforms for years.

  • House Intelligence Committee Chairman Adam Schiff last week sent a letter to Google and Facebook requesting additional information on the steps they're currently taking to provide medically accurate information on vaccinations to users.

The big picture: Tech companies prefer not to serve as content arbiters and have long struggled to balance free-speech ideals with efforts to limit undesirable online interactions like hate speech, bullying and misinformation. But the issue is harder to duck when the spread of false information can lead to real-world harm.

  • For this reason, many companies are taking stronger action against false vaccine-related information than they have in other areas of dubious content.

Each company is addressing the problem differently. Some are removing search results for vaccinations altogether. Others are treating medically inaccurate content like a policy violation.

  • YouTube: The company announced last month it will begin reducing recommendations of borderline content and content that could misinform users in harmful ways, including videos promoting a phony miracle cure for a serious illness. "This includes certain types of anti-vaccination videos," the company says.
  • Google: When it comes to Search, Google says that for queries on sensitive topics susceptible to misinformation, like health, it has systems in place to prioritize results from authoritative sources. To help with this, Google displays knowledge panels at the top of search results for illnesses and conditions, with information drawn from authoritative sources and evaluated by medical professionals.
  • Facebook: Facebook says it has "taken steps to reduce the distribution of health-related misinformation on Facebook," but acknowledges it has more to do. A spokesperson says the company is working on additional changes it will announce soon, such as reducing or removing this type of content from recommendations and demoting it in search results.
  • Twitter: There's no specific policy in place at Twitter to cover anti-vaccination content, but the company argues that the dynamics of its platform mean that readers are more likely to encounter balanced information.
  • Pinterest: The company is currently blocking results for searches like "vaccine" or "vaccination" altogether, saying it doesn't want to lead users down a rabbit hole of potentially harmful advice. However, Axios searches for "vaxxer" or even "autism vaccine" still returned a slew of results, suggesting it's still fairly easy for users to be exposed to this content.

The bottom line: In taking action against anti-vaccination content, online platforms are accepting arguments made by health care professionals and policymakers that they should treat it more as an incitement to public harm, like shouting "fire" in a crowded theater, than as reasonable debate.
