Jul 24, 2018 - Technology

How Facebook could realistically rein in speech

The Facebook logo superimposed on a flaming version of the PC game Minesweeper.

Illustration: Rebecca Zisser/Axios

Mark Zuckerberg says Facebook shouldn't be in the business of removing pages that spread fake news and conspiracy theories on its platform. But there's still plenty of debate inside and outside the company over whether that's the right approach.

The bottom line: There are lots of ideas out there, but no consensus.

Here's what a few experts told Axios they would do if they ran Facebook:

  • Sherry Turkle, MIT professor:
Pivot. Say that the business he thought he founded is not the business it grew into. Companies become organic entities as they find their social ecosystems. He didn’t set out to be a publisher, but that is what he has become — the world’s largest publisher, on an until-now unimaginable scale. An international publishing giant. He is having a reset. This gives him and his company new responsibilities. He is going to take them because the stakes are that large. In the end, democracy and free speech and community need publishers with ethics and values that have grown up in the tradition he grew up in. Everything will follow from that.
  • John Battelle, CEO of NewCo:
I'd immediately acknowledge the problem is one that is too complex, driven by too many stakeholders, for me or my company to adequately resolve. I'd stop trying to figure out how to govern Facebook from within, and ask for help from outside experts (they've done a bit of this) on how best to move forward. I hate to say it, but sometimes a blue ribbon panel/committee is a good idea. 
  • Tarleton Gillespie, author of "Custodians of the Internet," about content moderation on social platforms:
Facebook has been startlingly innovative in helping users say more, see more, find more, like more, friend more. Little innovation, by comparison, has gone to supporting users’ participation in governance and shared decision-making. What if Facebook shared responsibility for content moderation with the public—not just the labor, but the judgment? Given how effective Facebook has become at gleaning users' preferences, to more effectively advertise to them, imagine what would be possible if that same innovative engineering went to gleaning their civic commitments. Facebook could use AI techniques to identify clusters of civic commitments—not to impose one value system on everyone, as they do now, but to make more visible the lines we contest and offer spaces in which we can do so.
  • Jeff Jarvis, CUNY journalism professor and blogger:
I think Zuckerberg needs to empower himself with the knowledge that he can decide what is and is not appropriate on the platform he created not because of law but because of enlightened self-interest for the legacy of him and his company. Then he needs to convene a larger conversation with many stakeholders about what constitutes civil, acceptable behavior: Where's the line? As I wrote in my post, this is not an easy task. 

Where would you draw the line?

  • Battelle: "What is important is not where the line is, but that there is a line. Free speech means different things in different regions (and of course, most of us disagree on the finer points). You must draw a line, and make it very clear. However, this genie got out of the bottle a while back. It's going to be very hard to get it back in. Instead, I suggest this: Malicious falsehoods, hate speech, porn. Of these, only 'malicious falsehoods' has been where Facebook refuses to draw a line."
  • Jarvis: "It's not an easy question with a quick answer, no matter how much people expect Facebook to come up with that quick answer. I'd start here: What ongoing, enforceable standards would be needed to ban not only incitement to violence in Myanmar and Sri Lanka but also the obviously repellant Infowars? I would look at manipulation, threats, bigotry, and conspiracy theories as elements in this judgment."

Should moderation be done more by humans, more by computers, or an even mix?

  • Turkle: "He is going to rely on machines because that is a practical element of the reset, but he will need a lot more people, a lot more people who are steeped in the culture where they are working. There are great challenges and great opportunities for Facebook here if instead of shying away from this they step up to it."
  • Battelle: "It has to be a combination. This is not as hard as they are making it out to be, save for the first year or so. But once trained, a system of experts plus (algorithms) would work." And, he said, it should involve input from outside Facebook. "Too much is at stake."
  • Jarvis: "Humanity does not scale. Humanity is messy. Humanity is expensive. But I'm afraid it's humanity that's needed to work on this problem. My advice to Facebook is that it needs people empowered not only with rules and community standards but also with trust in their judgment to recognize and act on threats, harassment, bigotry, manipulation, and generally behavior unbecoming to a human being."

Yes, but: Though not as loud, other schools of thought hold that policing speech is impractical, not Facebook's job, or too important to trust to Facebook.
