Facebook CEO Mark Zuckerberg; Dan Rose, V.P., partnerships; and COO Sheryl Sandberg at Allen & Co.'s tech/media conference in Sun Valley, Idaho, last week Photo: Drew Angerer/Getty Images
Mark Zuckerberg's willingness to allow Holocaust deniers to post on Facebook has reignited a consequential debate over where tech companies should draw the line on free speech.
What happened: Zuckerberg faced instant backlash yesterday after saying in a podcast interview with Recode's Kara Swisher that the company would not take down a post denying the Holocaust, on the grounds that the user might not have intentionally gotten the facts about the event wrong.
- What he said: "I'm Jewish, and there’s a set of people who deny that the Holocaust happened. I find that deeply offensive. But at the end of the day, I don’t believe that our platform should take that down because I think there are things that different people get wrong."
- Facebook's approach to free speech, and to most content rules on its platform, is to assume the greatest possible good intent on the part of its users.
Zuckerberg later sent a clarification email to Swisher: "I enjoyed our conversation yesterday, but there’s one thing I want to clear up. I personally find Holocaust denial deeply offensive, and I absolutely didn’t intend to defend the intent of people who deny that."
- Key quote: "These issues are very challenging but I believe that often the best way to fight offensive bad speech is with good speech."
- The company also clarified to reporters last evening that it does not tolerate misinformation that incites violence.
Why it matters: The policy puts Facebook in the untenable position of constantly defending some of the most offensive content imaginable, including claims that are widely and unequivocally recognized as false across most cultures.
Sources familiar with Zuckerberg's thinking say the 34-year-old engineer-turned-corporate executive approaches content rules on his platforms through a strictly rational lens:
- That means most policies are designed to accommodate an extremely wide range of user intentions, including ones that may seem outrageous to the majority of people who use the product.
Facebook tries to curb the spread of misinformation by down-ranking the pages and posts of users that post misinformation — or if they do it often enough, removing their ability to advertise, or removing the page altogether.
- The company won’t disclose how much misinformation a user would need to post before being punished or removed, for fear that bad actors would game the system if that information were publicly available.
- Critics argue that this policy doesn't go far enough, and that Facebook should instead more closely adhere to the editorial scrutiny used by most media companies, since more than half of U.S. adults use Facebook to get news.
Be smart: The policy makes Facebook a target for bad actors who know they can post false and often offensive information on the platform without necessarily being removed, or in some cases punished at all.
- While Facebook uses artificial intelligence and human review of flagged posts to weed out bad content, those systems are nowhere near perfect, as we have seen time and time again, given the scale of content uploaded to the platform every minute.