Oct 17, 2022 - Technology

Why social media companies moderate users' posts

Illustration: Annelise Capossela/Axios (Twitter bird logo with emoji side eyes)

Facebook, Twitter and other online services set rules for users' posts not just to police individual statements but, more broadly, to comply with the law, to define their businesses and to protect their users.

Driving the news: Public debate over online speech flared again with Kanye West's ban from Twitter and Elon Musk's stated willingness to bring Donald Trump back to the service if he becomes its owner. But public understanding of why social networks moderate content remains murky.

Obeying the law: Social media networks have to follow local laws like everyone else.

  • That means, for example, that in many jurisdictions social media companies are obligated to remove child sexual abuse material, terrorist content and other illegal material.
  • Different countries have different rules.

Defining themselves: Each platform has its own mission and business model, and each is free to shape what kind of content it wants to distribute and how it wants to organize and display that material.

  • This allows anyone to, say, organize a forum for devotees of an obscure avant-garde musician and ban all conversation on other topics. And it allows Twitter to declare, "We aim to be a global public square" and let millions of users around the world discuss (almost) anything they want.
  • Most content rules on big, open platforms seek to keep postings within the bounds of civility and decency so that the ad businesses supporting their operations can thrive. Most advertisers shun controversy and conflict.

Protecting users: The thorniest problems on large-scale social networks arise as conflicts between users.

  • Services have no legal obligation to resolve such conflicts, but they generally see it as in their interest to maintain order and limit bullying and threats.
  • Many have set rules that, for example, bar speech that aims insults or threats at particular groups. That's what keeps getting Kanye in trouble on Twitter.
  • Big services see enforcing their rules as good online civics that protect individual users and keep their platforms welcoming to all. But individuals who see their posts removed or their privileges taken away can feel censored, and enforcing rules consistently is an almost impossible task.
  • Progressive users often feel that platforms don't enforce their existing rules well enough, while conservative users often argue that the rules censor their point of view.

Case in point: Twitter banned former President Donald Trump after the Jan. 6 attack on the Capitol for breaking its rules and because of the "risk of further incitement of violence."

  • Elon Musk has said he disagrees with that decision because "it alienated a large part of the country and did not ultimately result in Donald Trump not having a voice" since he's still posting on his own network, Truth Social.
  • But under the norms of the kind of content moderation Twitter was practicing, the company wasn't trying to silence Trump; it just wanted to stop him from using its service to cause harm.

Between the lines: Content rules are in constant motion because everything that shapes them is always changing — including companies' strategies or ownership, users' behavior, and the effectiveness of particular rules and penalties.

What's next: New laws passed in conservative states like Florida and Texas aim to bar online platforms from removing content based on its "point of view."

  • Critics fear that means social media networks would have to keep their hands off posts advocating Nazism, child abuse and terrorism.
  • The Supreme Court is likely to decide whether these laws can stand. If they are upheld, they could radically change the ground rules of online speech.