Meta's new policies open gate to hate

Illustration: Shoshana Gordon/Axios
Under Meta's newly relaxed moderation policies, women can be compared to household objects, ethnic groups can be called "filth," users can call for the exclusion of gay people from certain professions and people can refer to a transgender or non-binary person as an "it."
Why it matters: Meta's move to do away with third-party fact checkers made headlines, but some experts are even more troubled by policy shifts they say could chill online speech and lead to more real-world violence.
Zoom in: Meta's revised policy around hateful conduct (previously referred to as "hate speech") removes some prohibitions entirely, while also making new exceptions that allow people who are women, transgender, gay or immigrants to be targeted in ways prohibited for other groups.
- "We do allow content arguing for gender-based limitations of military, law enforcement, and teaching jobs," Meta says in its revised policy. "We also allow the same content based on sexual orientation, when the content is based on religious beliefs."
- Elsewhere Meta states: "We do allow allegations of mental illness or abnormality when based on gender or sexual orientation, given political and religious discourse about transgenderism and homosexuality and common non-serious usage of words like 'weird.'"
The new policy also makes room for people to call for gay and trans people to be excluded from specific places.
- While some prohibitions against slurs and dehumanization remain, Meta removed a rule that specifically barred comparing people to household objects.
- It removed another rule that had prohibited users from describing entire groups of people as "filth."
Zoom out: While the moves are billed as boosting free speech, many Meta-watchers say that the relaxed policies will actually chill speech for those in targeted groups.
- "Harassment drives people to silence themselves or leave online spaces entirely," says Ellery Biddle, editorial and policy lead at Meedan, a nonprofit that helps news and civil society organizations better contribute to public knowledge online.
- Many experts also expressed worry that the types of speech now being permitted will fuel real-world violence, pointing to bomb threats in Boston and elsewhere that followed online attacks on pediatric gender clinics.
- This kind of speech can even promote genocide, as has happened in Myanmar and elsewhere, Biddle said.
Between the lines: Even the language of the new policy itself suggests animus against gay and trans people.
- The policy uses the words "homosexuality" and "transgenderism" — the former is an outdated term, and the latter is used nearly exclusively by opponents of transgender rights.
- "For a legitimate company to employ intentionally anti-LGBT dog whistle language in such a dehumanizing and overtly bigoted way in its own hate speech policy is beyond comprehension," said Jenni Olson, senior director for social safety at GLAAD.
Meta also cut a line from its policy that had acknowledged a tie between what happens online and real-world violence.
- While the company kept language that says "we believe that people use their voice and connect more freely when they don't feel attacked on the basis of who they are," the company removed a line that said hate speech "creates an environment of intimidation and exclusion, and in some cases may promote offline violence."
- Biddle said that the changes, taken as a whole, amount to "giving a free pass for cherry-picked issues that align perfectly with culture-war hot topics for the right."
- "Nobody seems to see this move as anything but political," she said. "It shouldn't be surprising, but it is deeply concerning."
Context: Olson noted that Meta is basically giving a roadmap to those who want to express hateful views.
- "Meta itself is proactively stating that Meta allows LGBT people to be characterized as abnormal and mentally ill in a hate speech policy," Olson said. "It's a complete break with best practices in content moderation."
The other side: Meta stresses that many of the policy's protections remain, including a ban on specific threats.
- "We're getting rid of a number of restrictions on topics like immigration, gender identity and gender that are the subject of frequent political discourse and debate," Meta's newly promoted policy chief Joel Kaplan said in a post. "It's not right that things can be said on TV or the floor of Congress, but not on our platforms."
- "The problem with complex systems is they make mistakes," CEO Mark Zuckerberg said in a video announcement. "Even if they accidentally censor just one percent of posts, that's millions of people. And we've reached a point where it's just too many mistakes and too much censorship. The recent elections also feel like a cultural tipping point towards once again prioritizing speech."
- A Meta representative declined to comment further or answer a number of questions from Axios, including: when was the policy developed, who was consulted inside and outside of the company and why Meta decided to use the terms "transgenderism" and "homosexuality" instead of standard terms for gender and sexual identity.
What we're watching: While incoming President Trump and allies such as Rep. Jim Jordan (R-Ohio) have praised Meta's move, others have wondered how the changes will go over with advertisers, who typically don't like to see their brands associated with the kinds of content now being permitted.
- "I hope advertisers stand up and walk away," GLAAD CEO Sarah Kate Ellis told Axios.
- Ellis said Meta need look no further than rival X to see what happens when such content is permitted. "Everything's down from a business perspective," she said.
- At CES, X CEO Linda Yaccarino said that 90% of advertisers who'd left X have since returned. X is now a privately held company, which means it no longer has to release quarterly earnings reports or other detailed financials.
What's next: Tech companies will have to face some of these same issues in the context of AI — deciding which thorny questions chatbots will duck, which they will answer, and what they will say.
