Updated Aug 11, 2018 - Technology

Social media's new job: content cop

Illustration: Rebecca Zisser/Axios

A pileup of controversies over how Facebook, Twitter, YouTube and Microsoft moderate content on their sites is highlighting how thoroughly major tech companies have become arbiters of speech.

Why it matters: This isn't a job Silicon Valley wants — these companies have long championed the value of freewheeling, unsupervised, boundary-stretching online discourse. But it's the new normal in a media world where the power to publish and unpublish now sits with a few companies that aren't prepared for that role.

In just the last 10 days, Facebook, Twitter, YouTube, Microsoft and other tech giants have, separately or together, made a string of high-profile calls on what content stays up and what comes down.

Users expect Facebook and the other platforms to wear many new hats:

  • generals in a war on fake news,
  • judges in cases of inflammatory speech,
  • regulators of potentially harmful information and disinformation,
  • and peacekeepers at the ragged edges of social and political norms.

Those are the functions of government, not business — or they have been until now.

My thought bubble: Remember Colin Powell's "Pottery Barn" rule: "If you break it, you own it"? That was about the Iraq war; this is the digital media equivalent. Over the past decade, Facebook, Google, and their peers broke the public sphere. Now they own it.

This isn't just a problem in the U.S. If anything, as Max Fisher points out in the New York Times, the danger of Facebook as a hate amplifier emerged first in places like Myanmar, Indonesia, India, and Sri Lanka, where angry online mobs have translated all too readily into physical-world violence.

  • Meanwhile, as Global Voices documents, governments in the Middle East have figured out how to weaponize user flagging of harmful content as a tool to suppress dissent.

Be smart: This wave of moderation controversies comes alongside an equally vast and consequential series of conflicts over privacy, as the public becomes more aware of, and troubled by, how much personal data social networks, online retailers and ad networks have amassed.

  • Cambridge Analytica was a perfect storm for Facebook because it brought fears of privacy intrusion and concerns over inflammatory content together in one package.

The bottom line: Together, the moderation disputes and privacy debates point to a future in which Facebook and its peers face a tough choice: Get good, fast, at being quasi-governments themselves — or hand the mess back to real governments and return to writing code and making money.

Go deeper: How content moderation defines tech platforms
