Feb 10, 2020 - Technology

Clearview brings privacy concerns from facial recognition into focus

Illustration: Aïda Amer/Axios

People warning about the potentially chilling collision of big data sets and emerging technologies can now point to Clearview, the secretive facial recognition startup that scraped images from some of the largest public internet sites to create a database now used by hundreds of law enforcement agencies across the country.

Why it matters: Facial recognition tools have already raised privacy concerns in the U.S. and abroad, particularly when they're used by government, but the controversy over Clearview has shown that both industry and law enforcement are moving faster than the debate.

  • The issue came to light in mid-January when the New York Times published a startling exposé by Kashmir Hill on the company and its database of more than 3 billion images scraped from websites across the internet.

Clearview remains a tiny company, seeded with money from Peter Thiel, the Trump-supporting tech entrepreneur who sits on Facebook's board, and co-founded by a former aide to New York Mayor Rudy Giuliani.

  • But Clearview told the Times its database was in use by over 600 law enforcement agencies.

Between the lines: In the weeks since the initial Times report, the debate over Clearview, and face recognition technology more broadly, has continued and intensified.

  • Proponents cite the crimes, including child sexual abuse, that the technology has helped prevent or solve.
  • Opponents warn that current technology is inaccurate, especially for women and people of color, and point to the company's international ambitions as further cause for worry. Critics argue that for every legitimate benefit, there are even scarier uses, such as when the technology finds its way into the hands of authoritarian regimes — or stalkers.

What they're saying:

  • CEO Hoan Ton-That says his company has the right to use photos published on the internet, making a case similar to the one Google and other search engines made years ago to justify their businesses. "The way we have built our system is to only take publicly available information and index it that way," he said in a CBS This Morning interview.
  • ACLU Northern California's Matt Cagle: "Clearview AI built its facial recognition system by exploiting your 'publicly available' social media profile photos. They were able to do this because most social media companies make profile photos public by default and deprive users of the option to hide them."

Our thought bubble: We clearly need a conversation about the rules and laws that should govern use of facial recognition. We also need one on what rights people should have over their own likenesses.

  • That Clearview's tech could be used so widely with so little notice is one concern. Another is that we might not have much recourse even now, after its exposure.

Go deeper: GOP congressman wants answers from Clearview