Reddit removed 6% of content on its platform in 2020
Reddit said Tuesday the company removed 6% of the content uploaded to its site last year, up from a little under 5% in 2019.
Why it matters: Reddit attributes the uptick in removals in part to policy changes it made last year that gave moderators clearer grounds to act against hate and racism.
- Those guideline changes coincided with Reddit banning its controversial subreddit r/The_Donald, along with 2,000 other subreddits and users that violated its content policies against hate speech.
- The company said it began to see less hate speech shortly after the updates.
By the numbers: Of all content uploaded in 2020, about 2% was taken down by Reddit staff and 4% by moderators, a category that includes both human subreddit moderators and automated tools. The amount of content removed by moderators, human and automated, rose 61% over 2019.
- Reddit attributes that increase to more frequent use of its automated content moderation tool, Automod, to remove content from subreddits, as well as a 49% increase in content review submissions compared with 2019.
- Reddit notes that Automod can be particularly helpful in proactively weeding out bad comments and posts before they are reported.
- The company's employees removed a total of 82,858 communities last year for things like hateful content and harassment, as well as porn and violent crime.
- In total, about 6% of the more than 3 billion pieces of content uploaded to the platform in 2020 was removed.
The big picture: Reddit says the vast majority (99%) of removed content was taken down for content manipulation, things like spam, community interference and vote manipulation, rather than for other content policy violations.
- Less than 1% of removals were attributable to harassment, hate speech, violent content and similar violations.
- To that end, most of the accounts Reddit permanently sanctioned were for spam.
The bottom line: Those stats suggest it's often much easier for social media and messaging companies to police behavioral abuse like spam and manipulation than to judge whether individual posts and comments violate content policies. Facebook and other tech giants have said they take a similar approach when tackling misinformation.