Apr 28, 2022 - Technology

What Musk's free-speech Twitter could unleash

Illustration: Shoshana Gordon/Axios

Elon Musk's pledge to allow any speech on Twitter that doesn't break the law would open the door to a pandemonium of objectionable and harmful content — from gory videos to efforts to mislead voters to promotions of phony COVID cures.

Why it matters: Even much smaller social networks that aimed to minimize content moderation have found that an "anything goes if it's legal" policy quickly devolves into a miasma of violence, spam, fraud and bullying.

State of play: Twitter's current rules lay out a wide range of prohibited types of content. Some, like violent threats or child exploitation, would likely remain prohibited under Musk's policy.

But much of what Twitter now bars is not explicitly illegal. Here are some kinds of content Twitter's rules now prohibit that could return under Musk:

Election misinformation

  • Twitter's current rules say, "You may not use Twitter’s services for the purpose of manipulating or interfering in elections or other civic processes. This includes posting or sharing content that may suppress participation or mislead people about when, where, or how to participate in a civic process."
  • MyPillow CEO Mike Lindell lost his Twitter account after repeatedly claiming former President Trump had won the 2020 election.

Medical misinformation

  • Twitter has a policy against COVID-19-related misinformation. Medical misinformation, though, is not broadly illegal, and limiting moderation to the terms of the law could open up a host of false claims on everything from cancer cures to the safety of childhood vaccines.
  • Twitter permanently suspended the personal account of Rep. Marjorie Taylor Greene (R-Ga.) in January for repeated violations of its COVID misinformation policies, including her false claim of "extremely high amounts of COVID vaccine deaths."

Deepfakes and manipulated media

  • "You may not deceptively share synthetic or manipulated media that are likely to cause harm," Twitter's rules say. "In addition, we may label Tweets containing synthetic and manipulated media to help people understand their authenticity and to provide additional context." U.S. law largely does not yet address this issue.

Impersonating others

  • Twitter says, "You may not impersonate individuals, groups, or organizations to mislead, confuse, or deceive others, nor use a fake identity in a manner that disrupts the experience of others on Twitter."
  • In some cases, such as pretending to be someone else to commit fraud, such behavior could be illegal, but there are plenty of instances where it would not violate the law.

Platform manipulation and spam

  • "You may not use Twitter’s services in a manner intended to artificially amplify or suppress information or engage in behavior that manipulates or disrupts people’s experience on Twitter," according to the site's rules.
  • Musk has criticized bots and other types of inauthentic behavior, but most instances of them aren't specifically illegal.

Targeted attacks and hateful conduct

  • Twitter's hateful conduct policy prohibits a wide range of behavior. Some of it, such as specific threats of violence, may be illegal, but Twitter's policy goes far beyond that.
  • Practices not allowed on Twitter include displaying the logos of hate groups, dehumanizing a group of people based on a wide range of characteristics including race, gender, religion or sexuality, and intentionally misgendering someone.
  • Conspiracy theorist Alex Jones lost his Twitter account in 2018 after what the company said were repeated instances of abusive behavior.
  • One-time Trump adviser Steve Bannon lost a Twitter account in November 2020 for advocating the beheading of FBI Director Christopher Wray and Dr. Anthony Fauci.

Graphic violence and adult content

  • Twitter currently does not allow media that is "excessively gory" or that depicts sexual violence. Much of the graphic video the platform now prohibits is not explicitly illegal.

Non-consensual nudity

  • "You may not post or share intimate photos or videos of someone that were produced or distributed without their consent." Some states have laws prohibiting such videos as "revenge porn."

Suicide or self-harm

  • Twitter's rules say that "you may not promote or encourage suicide or self-harm." While harassing someone to pressure them to hurt themselves can be illegal, glorifying or encouraging suicide broadly is not. The same holds for glorifying anorexia and other eating disorders.

Perpetrators of violent attacks

  • Twitter, like Facebook and others, often removes the accounts of individuals who commit mass murders or other terrorist attacks, as well as "manifestos or other content produced by perpetrators." This policy goes beyond what is required by law.

Be smart: Laws vary from country to country. Pro-Nazi content, for example, is legal in the U.S. but illegal in Germany.

  • The EU has already warned that it expects a Musk-run Twitter to follow its rules.

The big picture: The content most often removed from Twitter includes graphic violence, hateful conduct, abuse and harassment, and promotion of suicide and self-harm, according to Karen Kornbluh, director of the Digital Innovation and Democracy Initiative at the German Marshall Fund.

  • According to Twitter's own reports, the company removed 5.2 million pieces of content during the first half of 2021 for violating rules in those four categories.
  • "There's a real danger that if Twitter changes its policy to only remove content that violates the law, we will see even more extremism at home and censorship abroad," Kornbluh said.