Google plans to have 10,000 human content reviewers in 2018

- Sara Fischer, author of Axios Media Trends

Photo: Rego Korosi via Flickr CC
Google plans to have at least 10,000 human content reviewers in 2018, mostly focused on YouTube videos, YouTube CEO Susan Wojcicki wrote in a blog post Monday. The effort is part of a push for greater transparency around its content moderation practices after advertisers fled the platform and YouTube pulled thousands of ads that had run adjacent to videos of "scantily clad children."
Why it matters: YouTube has faced two advertiser boycotts this year for having ads appear next to terrorist videos and inappropriate children's content. With each boycott, the platform has vowed more transparency and human review of its content. Its rival Facebook is going through a similar process, reacting to unforeseen consequences of an open platform that receives little up-front human moderation.
YouTube says that starting next year it will publish a regular report providing more aggregate data about the flags it receives and the actions it takes to remove videos and comments that violate its content policies.
- It will also expand its network of academics, industry groups and subject-matter experts it can learn from, in order to better understand emerging issues and develop more tools to bring transparency to flagged content.
- While it has seen success using machine-learning tools to weed out terrorist content, YouTube says it will broaden this effort going forward, integrating more machine-learning technology across other challenging content areas, like child safety and hate speech.
For creators concerned about bad ads running against their content or being inappropriately flagged, YouTube says it will do "a better job" of determining which channels and videos should be eligible for advertising by applying stricter criteria, using more manual curation and "significantly ramping up" its team of ad reviewers to ensure ads only run where they should.
Following in Facebook's footsteps, YouTube says that with machine learning it is now able to remove 98% of the videos flagged for violent extremism by its algorithms. Since June, Wojcicki says, YouTube's trust and safety teams have manually reviewed nearly 2 million videos for violent extremist content.
Our thought bubble: Facebook and Google keep adding more human reviewers as content crises compound, yet neither company will call itself a media company. Both argue that these moderators don't make editorial decisions, but rather hold flagged content to each company's standards. But those standards are somewhat vague and written to address unforeseen uses of their platforms, which means that as the platforms scale, these conflicts will only continue to pop up.