Photo: Alberto Pezzali/NurPhoto via Getty Images
Facebook will expand its fact-checking operation to vet photos and videos, the company announced Thursday.
Why it matters: Advances in technology are making it easier for bad actors to manipulate real videos to make it appear that someone did or said something they did not. Experts predict that these sophisticated forms of doctored media, called "deepfakes," are the next frontier of misinformation.
What's new: To date, most of Facebook's fact-checkers have focused on reviewing articles. Now, Facebook says it is expanding fact-checking for photos and videos to all 27 of its fact-checking partners in 17 countries around the world. The company is also regularly onboarding new fact-checking partners.
How it works: Facebook says it has built a machine learning model that uses various "engagement signals," including reports from users, to identify potentially false content. It then sends that content to fact-checkers for review.
- It will also use a technique called optical character recognition (OCR) to extract text from photos and compare that text to headlines from fact-checkers' articles.
- Based on its research, Facebook has sorted false photos and videos into three categories: (1) Manipulated or Fabricated, (2) Out of Context, and (3) Text or Audio Claim.
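To make the OCR-matching step above concrete, here is a minimal, hypothetical sketch in Python. It is not Facebook's actual system: the OCR step is assumed to have already run (the extracted text is passed in directly), the sample headlines are invented, and the fuzzy matching uses Python's standard-library `difflib` with an arbitrary similarity threshold.

```python
from difflib import SequenceMatcher

# Hypothetical headlines from fact-checkers' debunk articles
# (invented examples, not real fact-checks).
DEBUNKED_HEADLINES = [
    "No, this photo does not show a shark swimming on a flooded highway",
    "Viral image of politician's speech is digitally altered",
]

def similarity(a: str, b: str) -> float:
    """Return a similarity ratio in [0, 1] between two strings,
    ignoring case."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_ocr_text(ocr_text: str, threshold: float = 0.6):
    """Compare text extracted from a photo against known debunked
    headlines; return the best match above the threshold, or None.

    In a real pipeline, ocr_text would come from an OCR engine run
    over the image; here it is supplied directly for illustration.
    """
    best = max(DEBUNKED_HEADLINES, key=lambda h: similarity(ocr_text, h))
    return best if similarity(ocr_text, best) >= threshold else None
```

The threshold trades precision for recall: set too low, authentic photos get flagged for review; set too high, reworded copies of a debunked claim slip through.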
Our thought bubble: Timing will be a challenge here. Fact-checking review takes time to ensure authentic content isn't unnecessarily removed, but viral videos and photos often spread widely before they can be flagged, evaluated and removed.