Jul 6, 2017

Facebook's push to weed out bad content

Photo: Noah Berger / AP

Facebook announced last week that it will update its News Feed to reduce "low-quality links." The company's internal research shows that a small group of users routinely shares the bulk of such links, which feature "clickbait, sensationalism and misinformation." The update will de-prioritize these spam-like articles.

Why it matters: The overall effort to clean up the content on Facebook's platform has escalated since the election, following claims that the platform helped perpetuate pro-Trump fake news. While Facebook's business model is based on scale, its vision is to be a driver of social change, which means it needs to balance inclusivity with discretion. Below are a few of the actions the tech giant has taken to align with these goals.

Go deeper: Despite these efforts, many still believe Facebook could do more to respond quickly to user complaints about content and elevate premium content from publishing partners.

Timeline:

  • June 27: Facebook reaffirmed its commitment to fighting hate speech by publishing its definition of the term and its enforcement policies.
  • June 26: Facebook, Microsoft, Twitter, and YouTube came together again to form the Global Internet Forum to Counter Terrorism, pledging to make their platforms "hostile to terrorists and violent extremists," promote research, and develop best practices.
  • June 15: In an effort to counter terrorism online, Facebook paired AI systems with human experts to review and remove images and language that violate its policies. It also engaged industry partners around the world on anti-hate projects and training.
  • June 5: Facebook's Director of Policy announced that the company aims to prevent terrorists from accessing the platform.
  • May 24: The Trending results page was redesigned to surface a more diverse range of content, following reports that conservative articles had been suppressed.
  • May 23: Facebook's Global Policy Management team published a blog post on safety and objectivity, following a week of shocking images out of Syria, to highlight the Community Standards that guide its content reviewers.
  • May 17: New algorithms were rolled out to reduce clickbait at the level of individual posts, identifying exaggerated and misleading headlines and ranking those stories lower in the News Feed.
  • May 10: A rolling update meant fewer users saw ads linking to "low-quality web page experiences."
  • April 6: Facebook announced partnerships to curb the spread of false information on the platform, including work with fact-checking organizations and the News Literacy Project. It also helped establish the News Integrity Initiative, aimed at protecting online news readers, and added news-literacy resources to its Help Center.
  • January 11: The company launched the Facebook Journalism Project, an initiative for both journalists and users meant to promote media literacy and blunt misinformation campaigns.
  • January 6: Campbell Brown, a former CNN anchor, was hired as a liaison between Facebook and news organizations. NBC's Erika Masonhall later joined the news partnerships team.
  • December 15: Four updates were made to users' feeds, including new ways to report and flag disputed content, measures to reduce the financial incentive behind clickbait, and lower rankings for sensationalist articles.
  • December 5: Facebook announced an industry-wide database, built in partnership with Twitter, YouTube, and Microsoft, that tracks and deletes "hashes," or digital "fingerprints," of potential terrorist content (illustrated in the sketch after this timeline).
  • November 11: The company updated its advertising policies to restrict advertisers' use of "ethnic affinity" targeting.
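
How that shared hash database works can be sketched in a few lines. The consortium's actual system and matching techniques aren't public, so this is a minimal illustration only: the `known_hashes` set and the function names are hypothetical, and a plain SHA-256 digest stands in for the perceptual fingerprints that production systems actually use.

```python
import hashlib

# Hypothetical stand-in for the shared industry database: in reality only
# the fingerprints are pooled across platforms, never the files themselves.
known_hashes: set[str] = set()

def fingerprint(media_bytes: bytes) -> str:
    # SHA-256 matches only byte-identical copies; real systems use
    # perceptual hashes that survive re-encoding, resizing, and cropping.
    return hashlib.sha256(media_bytes).hexdigest()

def report_content(media_bytes: bytes) -> None:
    # One platform flags a file and contributes its fingerprint to the pool.
    known_hashes.add(fingerprint(media_bytes))

def should_remove(media_bytes: bytes) -> bool:
    # Any participating platform can check an upload against the pool.
    return fingerprint(media_bytes) in known_hashes
```

The design choice worth noting is that sharing hashes rather than the underlying media lets each company enforce its own policies without redistributing the offending content.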