Facebook accused of publishing pornography, terrorist content
- Sara Fischer, author of Axios Media Trends

Facebook is being accused of knowingly letting pornography and terrorist content sit on its site without removing it, reports The Times of London.
What happened: Per the report, Facebook failed to remove content featuring ISIS beheadings, pornographic cartoons and glorifications of hatred, even after the content was flagged to moderators. Moderators said the content didn't violate Facebook's community standards, although those standards clearly state: "We remove graphic images when they are shared for sadistic pleasure or to celebrate or glorify violence."
Where it stands: British regulators cited by The Times say Facebook's failure to remove such images in a timely fashion violates British law. Facebook removed the content after being contacted by The Times and says it is grateful to the publication for bringing the content to its attention. The Times also reported the incidents to British police.
The legal details: In the U.S., a provision of law, Section 230 of the Communications Decency Act of 1996, is designed to protect tech companies from being held liable for failing to remove indecent content that is automatically distributed on their platforms without human oversight. Facebook has grappled with the use of human oversight, as it puts the company at risk of making judgment calls that could offend users or advertisers. For example, Facebook came under fire last spring when its human moderators reportedly suppressed conservative content in its trending topics column. To reduce liability, Facebook later removed human moderators from the column. (Google recently faced a similar issue when YouTube filtered non-explicit LGBTQ content.)