Updated Jul 28, 2018 - Technology

What we're reading: How content moderation defines tech platforms

A phone with a lock symbol is seen with a laptop computer with a Facebook login page. Photo: Jaap Arriens/NurPhoto via Getty Images

We know that Facebook, YouTube, Twitter, Instagram, Snapchat, and all the other social media platforms "moderate" the content users post, typically aiming to remove material that violates either a host country's law or the platform's own standards.

The big picture: Moderation is usually understood to be the onerous and thankless cleanup task that these social media giants have had to shoulder as they scaled up to global ubiquity. But the choices companies make about what to delete and who to boot are actually central to their identities, argues scholar Tarleton Gillespie in a new book on the subject.

Gillespie's title, "Custodians of the Internet," points to the ambivalence about the labor of content moderation that's shared by platforms and users. "Custodians" are workers responsible for maintenance and cleanup; they're also keepers of a trust, protectors of a person, place, or institution.

Somebody's got to do it: Either way, it's a tough job — and one that the platform companies prefer to keep mostly out of view. That serves their desire to be seen as neutral arbiters of what's fit to post.

  • But it's increasingly untenable in a world where the platforms have become essential public forums, and where their choices affect livelihoods, elections, and even life or death.

Edge cases: Gillespie assembles a rogues' gallery of the toughest challenges in the annals of moderation:

  • The celebrated Vietnam War news photo of a crying, naked girl running down a road after a napalm attack, which Facebook moderators repeatedly removed for violating the rules against underage nudity.
  • The long-running fight between Facebook and breastfeeding mothers who wished to post photos but ran afoul of a "no nipples" rule.
  • Violent images used in terrorist recruiting pitches, which are taken down in ways that can also sweep up similar images in news coverage or scholarship about terrorism.

Both Twitter and Facebook have been caught in a persistent bind: not strict enough to satisfy users who have been harassed, yet still triggering outrage from other users who feel they've been censored.

To sort out such dilemmas, social networks typically employ several tiers of labor:

  • a small cadre of company employees who set policy and deal with the toughest calls;
  • larger pools of contractors, overseas workers, and gig-economy laborers who process images and posts at punishing speed;
  • and the entire base of users, who end up on volunteer community patrol as they flag objectionable content with a click.
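
To make that division of labor concrete, here is a minimal, hypothetical sketch in Python of how the three tiers could fit together: user flags feed a queue, frontline contract reviewers make the bulk of the calls, and ambiguous cases escalate to a small in-house policy team. None of the names, thresholds, or rules below come from Gillespie's book or any real platform; they are placeholders for illustration only.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Decision(Enum):
    KEEP = auto()
    REMOVE = auto()
    ESCALATE = auto()  # too ambiguous for the frontline tier to decide


@dataclass
class FlaggedPost:
    post_id: str
    context: str = "personal"  # hypothetical label: "personal", "news", "education", ...
    flag_count: int = 0


def user_flag(post: FlaggedPost) -> None:
    """Tier 3: any user can flag content with a click."""
    post.flag_count += 1


def frontline_review(post: FlaggedPost) -> Decision:
    """Tier 2: contract reviewers apply written rules at high speed."""
    if post.flag_count < 3:
        return Decision.KEEP      # not enough signal to act yet
    if post.context in ("news", "education"):
        return Decision.ESCALATE  # context-dependent edge case
    return Decision.REMOVE


def policy_review(post: FlaggedPost) -> Decision:
    """Tier 1: a small in-house policy team makes the final judgment call."""
    return Decision.KEEP if post.context == "news" else Decision.REMOVE


def moderate(post: FlaggedPost) -> Decision:
    decision = frontline_review(post)
    if decision is Decision.ESCALATE:
        decision = policy_review(post)
    return decision


if __name__ == "__main__":
    # A heavily flagged but clearly newsworthy image, like the napalm photo.
    photo = FlaggedPost("war-news-photo", context="news")
    for _ in range(5):
        user_flag(photo)
    print(moderate(photo))  # Decision.KEEP, after escalating past the frontline tier
```

In practice, of course, the frontline tier is the large contractor pool described above working at punishing speed, and the rules are far more elaborate (and more contested) than a few lines of code.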

All this work is an afterthought to the social networks, which devote their resources to building products and selling ads. But marginalizing moderation has only helped mire Facebook, Twitter, and the rest of the social-network business in a swamp of controversy and complaint.

Why it matters: Moderation, the promise that an online space will be managed and made to conform to some set of rules narrower than those that prevail on the open web, isn't a sideshow at all. It's what makes platforms unique, Gillespie argues — different both from publishers who create content and from common-carrier internet service providers who simply transmit it.

The boundaries platforms set on expression are how they distinguish themselves from one another, creating different kinds of spaces with different formal rules and social practices. "Custodians of the Internet" makes a strong case that Facebook and its competitors should start to treat moderation as a defining service rather than a necessary evil.
