Jul 17, 2020 - Economy & Business

Advertising giants agree to evaluate mutual definition of hate speech

Illustration: Aïda Amer/Axios

The Global Alliance for Responsible Media (GARM), an industry body made up of the world's biggest advertisers, agencies and platforms, including several Big Tech companies, has agreed to tackle certain issues collectively, including how to better define hate speech across the entire industry.

Why it matters: Social media companies have faced increased scrutiny over how they moderate content on their platforms. This is a step toward tackling the issue together, even if it's mostly a formality for now.

The backdrop: GARM was created last year at the annual Cannes Lions Festival to tackle brand safety in advertising.

  • It brings together the world's biggest advertisers, like Procter & Gamble and Unilever, and executives from the world's biggest agencies. Tech and media companies like NBCUniversal, Google/YouTube, Twitter and Facebook are also part of the group, as are the industry's biggest trade associations, like the Association of National Advertisers and the Interactive Advertising Bureau.

What's happening: In a note to advertisers, Facebook's VP of Global Marketing Solutions Carolyn Everson said that the industry, via GARM, has settled on four areas for immediate action: definitions of harmful content like hate speech, measurement, audits and suitability controls.

  • The group has adopted 11 standard definitions of harmful content, recently agreed to by GARM's brand safety working group, "with immediate focus on Hate Speech + Acts of Aggression," and plans to align on those definitions next month.
  • In her note, Everson said she would provide an update on how the tech giant keeps ads from appearing next to "hate speech" or "acts of aggression." She said meetings have already taken place between Facebook's policy team and GARM on the issue.

Yes, but: Tech platforms still maintain the right to define and police hate speech on their own terms.

  • A spokesperson for YouTube says that the company "remains committed to working with GARM and the industry to identify and treat harmful content in a consistent way in order to build a more sustainable and healthy digital ecosystem for everyone," but it still reserves the right to enforce its own policies around hate speech, including defining it more broadly in some cases.
  • A spokesperson for Twitter says the company "is an active GARM member, supports the movement towards industry standards and frameworks for content monetization, and is committed to ongoing work with industry leaders to find solutions to promote healthy public conversation."

Between the lines: Everson often sends emails like this to advertisers. She is considered the face of Facebook's sales and advertising teams and frequently takes the lead within the industry on addressing tough issues.

  • GARM, which is a part of the World Federation of Advertisers, has for months been working with big-name advertisers to come up with standards to address brand safety.
  • In light of the recent reckoning around systemic racism in the U.S., the group has focused more heavily in recent weeks on addressing and defining harmful content, including hate speech, across the advertising community.

Worth noting: GARM often hosts working groups to discuss issues in advertising. The group's policy recommendations aren't rules that every member must follow, but they are agreed-upon steps the industry should take to define and tackle pressing problems.
