Nov 28, 2017

Facebook says it deletes most terror content before it's flagged

Illustration: Rebecca Zisser / Axios

Facebook says nearly all ISIS and Al Qaeda-related terror content is removed through machine learning before anyone flags it, and that 83% of subsequently uploaded copies of that content are removed within an hour of upload.

Why it matters: Facebook has been criticized in the past for not moving quickly enough to remove bad content once it has been uploaded or flagged to human reviewers. Critics cried foul, for example, when Facebook took more than two hours to remove videos of a Cleveland murder after the initial video, in which the killer stated his intentions, was uploaded.

The main way Facebook removes such content is by building a database of known terror videos, images and text that machines can read, then automatically blocking any uploads that match from appearing on the platform.
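Conceptually, that matching works like a fingerprint lookup: known bad content is hashed once, and every new upload is checked against the stored fingerprints before it can spread. The Python sketch below is a minimal illustration under that assumption, using exact SHA-256 hashes; all class and variable names are hypothetical, and Facebook's production systems are reported to use perceptual matching rather than exact digests.

```python
import hashlib

# Minimal sketch of hash-based content matching (hypothetical names).
# Known terror content is fingerprinted once; each new upload is
# checked against the fingerprint set before it goes live.

class KnownContentDatabase:
    def __init__(self):
        self._hashes = set()

    def add(self, content: bytes) -> None:
        """Fingerprint known bad content and store the digest."""
        self._hashes.add(hashlib.sha256(content).hexdigest())

    def matches(self, upload: bytes) -> bool:
        """True if the upload is a byte-identical copy of known content."""
        return hashlib.sha256(upload).hexdigest() in self._hashes

db = KnownContentDatabase()
db.add(b"<bytes of a known propaganda video>")

if db.matches(b"<bytes of a new upload>"):
    print("block: matches known terror content")
else:
    print("allow, subject to other review")
```

Exact hashes only catch byte-identical copies; re-encoding or cropping a video changes the digest entirely, which is why real matching systems rely on perceptual fingerprints designed to survive such edits.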

The catch: Terror content is not all the same. AI can find matches, but "a system designed to find content from one terrorist group may not work for another because of language and stylistic differences in their propaganda," Facebook says. So the company is focusing its efforts on the terrorist organizations that pose the biggest global threat: ISIS and Al Qaeda. The hope is to "expand the use of automated systems to detect content from regional terrorist organizations too," the company says.
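To see why a detector built for one group's propaganda can miss another's, consider this toy sketch (assuming scikit-learn; every training phrase is a placeholder, not real data): a model fit on one group's vocabulary gets no signal from text written in entirely different language.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy sketch: a classifier trained only on "group A" phrasing
# (placeholder strings) learns that vocabulary and nothing else.
train_texts = [
    "group A slogan one", "group A slogan two",  # placeholder positives
    "weather report", "sports recap",            # placeholder negatives
]
train_labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# Text reusing group A's vocabulary scores well above chance...
print(model.predict_proba(["group A slogan three"])[0][1])
# ...but wording the model has never seen contributes no features,
# so the score falls back toward chance.
print(model.predict_proba(["an entirely different style of chant"])[0][1])
```

That generalization gap is the practical reason to train group-specific models rather than expecting one universal detector to cover every organization's language and style.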

Our thought bubble: A lot of the terror content that spreads online these days is domestic. While people are certainly inspired by ISIS and Al Qaeda-related posts, there's still a long way to go before all terrorist content is removed as thoroughly and as quickly as material from those two groups.
