
Illustration: Rebecca Zisser/Axios
The online spread of the Christchurch mosque killer's sickening first-person video divided experts, industry insiders and the broader public into two opposing camps: Some saw the debacle as proof that Facebook and YouTube can't police their platforms. Others saw it as evidence that they won't.
Why it matters: How we define the platforms' struggle to block the New Zealand shooter's video will shape how we respond to the problem. Either way, Facebook and YouTube don't come off well.
Driving the news:
- Late Monday night, Facebook posted new details about the video, reporting that the shooter's original live stream was viewed fewer than 200 times in total. None of those viewers flagged it to moderators, and the first report of the video arrived 12 minutes after the live stream ended.
- "Before we were alerted... a user on [troll site] 8chan posted a link to a copy of the video on a file-sharing site," Facebook says. Once Facebook took down the original, users began reposting copies.
- Facebook previously reported that in the 24 hours after the shooting, it removed 1.5 million copies of the video — 1.2 million of which were blocked as they were uploaded.
- YouTube didn't report numbers, but a Washington Post account said that despite the video platform's efforts to expand human moderation and automated systems, "humans determined to beat the company’s detection tools won the day."
One widely held view is that Facebook and YouTube are simply too big to monitor and control, even with the legions of human moderators they employ and the AI-driven recognition tools they are beginning to deploy.
- "Social-media platforms were eager to embrace live streaming because it promised growth. Now scale has become a burden," Neima Jahromi wrote in The New Yorker.
- In this picture, outnumbered moderators will always be a step behind masses of determined users, and the whack-a-mole game will never end.
- As platforms get better at identifying and blocking particular classes and instances of undesirable content, the content's proponents will find new tactics for modifying, hiding and redistributing the material.
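To see why that whack-a-mole dynamic is so hard to win, consider the simplest form of re-upload blocking. The toy Python sketch below is purely illustrative (the `toy_fingerprint` helper and its numbers are invented for this example, not Facebook's or YouTube's actual systems): an exact-hash blocklist misses a copy that has been altered even slightly, which is why platforms lean on fuzzier perceptual matching that determined uploaders then try to defeat with crops, filters and re-encodes.

```python
# Toy illustration only -- not any platform's actual matching system.
import hashlib

original = b"...video bytes..." * 1000              # stand-in for a known-bad video file
reencoded = original.replace(b"video", b"vide0")    # a lightly altered copy of the same content

# Exact cryptographic hashing: any change yields a completely different digest,
# so a blocklist of known-bad hashes misses the altered copy entirely.
print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(reencoded).hexdigest())        # bears no resemblance to the digest above

# Perceptual-style matching instead summarizes coarse content features, so
# near-duplicates land close together and can be caught with a distance threshold.
def toy_fingerprint(data: bytes, buckets: int = 32) -> list[int]:
    """Average byte value per chunk -- a crude stand-in for a real perceptual hash."""
    chunk = max(1, len(data) // buckets)
    return [sum(data[i:i + chunk]) // chunk for i in range(0, chunk * buckets, chunk)]

def distance(a: list[int], b: list[int]) -> int:
    return sum(abs(x - y) for x, y in zip(a, b))

print(distance(toy_fingerprint(original), toy_fingerprint(reencoded)))  # small: still flaggable
```

The cat-and-mouse part is that each new evasion tactic (mirroring the frame, adding borders, re-cutting the clip) pushes copies just far enough apart that the matchers and their thresholds have to keep being retuned.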
Another view holds that Facebook and YouTube have both repeatedly shown their ability to police their vast online estates when given no alternative:
- They've taken strong measures against child pornography, and indeed kept most kinds of porn at bay.
- They've moved forcefully to keep ISIS recruiting videos from publicly circulating.
- They've worked to eliminate access to Nazi propaganda in Germany, where it's outlawed.
- They've cracked down effectively on distribution of copyrighted materials via their services.
- If strong enough legal, financial and socio-political incentives have done the trick in these areas, the argument goes, surely Facebook and YouTube can also take effective action against violent right-wing extremists.
The two scenarios paint very different pictures of what's going on.
- In one, platform managers are playing a Sisyphean delete-and-block game against persistent and inventive opponents.
- In the other, companies that prioritize engagement metrics are protecting their business interests by failing to limit offensive content — except when media coverage and ad boycotts make action unavoidable.
Be smart: Hard as the problem is, people are going to keep pushing the platforms to solve it. And there are plenty of other steps Facebook and YouTube could take.
- For instance, during a crisis they could suspend real-time uploads or temporarily block uploads from new or unverified accounts (a rough sketch of that kind of gate follows this list).
- In a Twitter thread, Homebrew's Hunter Walk (a former YouTube exec) proposed methods for YouTube to protect freedom of speech while curtailing "freedom of reach."
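Here is a hypothetical sketch of the crisis-mode gate mentioned above. The names and thresholds (`Account`, `may_livestream`, the 30-day minimum) are invented for illustration, not any platform's real policy or API.

```python
# Hypothetical crisis-mode gate for live uploads; thresholds are illustrative only.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Account:
    created_at: datetime
    verified: bool

def may_livestream(account: Account, crisis_mode: bool,
                   min_account_age: timedelta = timedelta(days=30)) -> bool:
    """During a declared crisis, only verified or well-established accounts may stream."""
    if not crisis_mode:
        return True
    age = datetime.now(timezone.utc) - account.created_at
    return account.verified or age >= min_account_age

# Example: a day-old, unverified account is blocked while crisis mode is on.
new_account = Account(created_at=datetime.now(timezone.utc) - timedelta(days=1), verified=False)
print(may_livestream(new_account, crisis_mode=True))   # False
print(may_livestream(new_account, crisis_mode=False))  # True
```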
Go deeper: The real tech regulators