
Pittsburgh mourners after the synagogue shooting. Photo: Aaron Jackendoff/SOPA Images/LightRocket via Getty Images
The last week proved that hate still abounds in America, and that social media continues to fuel it.
The bottom line: On social media today, false narratives spread, bigotry intensifies, and sometimes entire plots are hatched. Tech's platforms have become hate-speech amplifiers, and their owners, especially Twitter, haven't shown they have a handle on the problem.
Case #1: Twitter and the mail-bombs
Two weeks before his arrest, the suspect in the mail-bomb campaign had been reported to Twitter for making a direct threat against a political commentator, but Twitter responded that he hadn't violated the company's terms of service.
Twitter took down the account after the accused bomber was in custody and eventually apologized for not taking action on that initial report.
"We made a mistake when Rochelle Ritchie first alerted us to the threat made against her. The Tweet clearly violated our rules and should have been removed. We are deeply sorry for that error."— Twitter statement
Case #2: Gab and the Pittsburgh shooting
The Pittsburgh synagogue shooting suspect had been a frequent poster on Gab, a lesser-known social media platform that bills itself as a "free speech" advocate. Gab, which launched as an alternative for extremists whom Twitter had begun to banish, tends to allow violent hate speech as long as it isn't directed at particular individuals.
The tech industry took more concrete steps here. Microsoft had quietly cut ties with Gab a month earlier, forcing it to find a new hosting provider. After the shooting, PayPal cut ties with Gab, followed by Stripe. Hosting providers Joyent and BackBlaze also cut services to Gab, pushing the site offline by Sunday evening.
Meanwhile: Twitter critics noted just how much hate speech has remained up on the site.
- Communications professor Jennifer Grygiel pointed out a number of hate-filled screeds that had been on the site for years. While many of the text-based posts were taken down, some image-based memes remained, suggesting that Twitter has a tougher time screening text embedded in an image than plain-text tweets.
- BuzzFeed's Charlie Warzel noted the spread of a false meme on Twitter suggesting George Soros — a Holocaust survivor — was a Nazi.
- Plus, for a time on Sunday, typing the hashtag symbol and the letter B produced auto-complete suggestions that included #burnthejews. (That changed later Sunday, following an inquiry from Axios.)
Advocates of online free speech have long argued that it's good to keep extremists' activities out in the open, where sunlight is the best disinfectant. But too often, social networks have turned out to be toxic environments where the fumes blot out the light instead.