Social media companies take on terrorism in a post-9/11 world

- Sara Fischer, author of Axios Media Trends

Illustration: Rebecca Zisser
At the time of the September 11th attacks, the world's glimpse into terrorist thinking and communication came primarily through group-sponsored terrorist videos, often physically seized by authorities and carefully edited for mass audiences by the television networks. Today, terrorist propaganda is easily accessible online and through social media, making it harder to censor, monitor and control.
Why it matters: As the Internet becomes more accessible, responsibility for monitoring terrorist content is spreading from governments to technology companies, and this year has seen some of the most aggressive efforts yet by both groups to get the situation under control.
Government pressure:
- China announced this summer that it's investigating its own tech companies, like Tencent and Baidu, for giving users an avenue to spread violence and terror.
- Proposals put forward earlier this year in the UK, France and Germany would make tech companies legally liable for failing to control terrorist-related content on their platforms.
- Regulatory pressure on the tech community is rising as lawmakers begin to question the power of major tech monopolies.
Brand pressure:
- An uptick in pressure from some of the world's biggest advertising agencies caused brands to abandon YouTube in droves this spring. YouTube's direct ad spend was reportedly down 26% in Q2 as a result.
- YouTube quickly announced changes to its policies to ensure advertisers and users felt its platform was a safe environment for content consumption and messaging. (More below.)
How tech companies are weeding out threats: Tech companies built on open platforms, like Google, Facebook and Twitter, have ramped up action in the past several months to ensure that terrorist accounts are blocked and terrorist content is removed.
Their strategies are often two-fold: 1) Improve rapid-response efforts to remove terrorist content or accounts when they are reported, and 2) invest in artificial intelligence technologies that can block terrorist content before it's ever uploaded.
- Twitter has perhaps been the most aggressive about blocking individual accounts. According to a Twitter spokesperson, the company has suspended 636,248 accounts since mid-2015, among them "more than 360,000 accounts for threatening or promoting terrorist acts, primarily related to ISIS." According to Twitter's most recent Transparency Report, 74% of accounts suspended in the second half of 2016 were surfaced by internal, proprietary spam-fighting tools.
- Facebook revealed in a memo published this summer that it's turning to artificial intelligence to help stop terrorism from spreading on its site. The company deploys artificial intelligence to detect when people try to repost photos and videos associated with terrorism ... and when there are "new fake accounts created by repeat offenders," Axios' David McCabe reported in June. Facebook does not reveal the number of accounts it suspends due to terrorism. (A sketch of how such re-upload matching can work appears below.)
- YouTube is mounting an intervention when users search for terms linked to extremism. Per McCabe, the company will "display a playlist of videos debunking violent extremist recruiting narratives."
- Facebook, Twitter, Microsoft and YouTube will be involved in a new coalition that will, according to YouTube, make the companies' "hosted consumer services hostile to terrorists and violent extremists." The companies will share information with outside groups, work on technology to address extremism and "commission research to inform our counter-speech efforts and guide future technical and policy decisions around the removal of terrorist content."
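For readers curious about the mechanics, the sketch below shows how re-upload matching of the kind Facebook describes is commonly done: each upload is fingerprinted and checked against hashes of content moderators have already removed. This is an illustration under general assumptions, not Facebook's actual system; the hash set and function names are hypothetical, and real deployments typically use perceptual hashing so that slightly altered copies still match.

```python
# Minimal sketch of hash-based re-upload matching, the general technique for
# catching known terrorist images or videos at upload time. All names and data
# here are illustrative, not any company's actual system.
import hashlib

# Hypothetical set of fingerprints for media that moderators previously removed.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"bytes of a previously removed propaganda video").hexdigest(),
}

def fingerprint(file_bytes: bytes) -> str:
    """Fingerprint an uploaded file.

    Production systems use perceptual hashes so that slightly altered copies
    still match; plain SHA-256 keeps this sketch simple.
    """
    return hashlib.sha256(file_bytes).hexdigest()

def should_block(file_bytes: bytes) -> bool:
    """Return True if the upload matches previously removed content."""
    return fingerprint(file_bytes) in KNOWN_BAD_HASHES

# Usage: screen an upload before it is published.
upload = b"bytes of a previously removed propaganda video"
print("blocked" if should_block(upload) else "allowed")  # -> blocked
```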
One important caveat: Axios' Mike Allen highlighted an important finding from Professors Peter Neumann and Shiraz Maher of the International Centre for the Study of Radicalisation, published by the BBC earlier this year: "The internet plays an important role in terms of disseminating information and building the brand of organisations such as [the Islamic State], but it is rarely sufficient in replacing the potency and charm of a real-world recruiter."