Nov 5, 2019 - Technology

IBM calls for regulation to avoid facial recognition bans


Facial recognition at Dulles Airport. Photo: Bill O'Leary/The Washington Post/Getty

IBM, one of several Big Tech companies selling facial recognition programs, is calling on Congress to regulate the technology — but not too much.

Why it matters: China has built a repressive surveillance apparatus with facial recognition; now, some U.S. cities are rolling it out for law enforcement. But tech companies worry that opponents will react to these developments by kiboshing the technology completely.

The big picture: IBM's proposal joins calls for federal facial recognition regulations from Microsoft, Amazon and the U.S. Chamber of Commerce.

  • Big Tech is threatened by a yearlong groundswell of bans and proposed restrictions on facial recognition bubbling up in cities like San Francisco and states like Massachusetts.
  • The companies say these moves would cut off beneficial uses of the technology, like speeding up airport security or finding missing children.
  • Yes, but: They stand to gain from keeping the market open.

What's happening: In a white paper shared first with Axios, IBM is calling for what it describes as "precision regulation": limiting potentially harmful uses rather than banning the technology outright.

  • IBM proposes treating various kinds of facial recognition differently. Face detection software, which simply counts the number of faces in a scene, is less prone to abuse than face matching, which can pick specific people out of a crowd (see the sketch after this list).
  • "There will always be use cases that will be off-limits," IBM chief privacy officer Christina Montgomery tells Axios. "That includes mass surveillance and racial profiling."

At issue is public trust in facial recognition. Companies hope that curtailing some uses will rescue the technology from sliding into pariah status.

Details: IBM calls for three policies it says are ready to be implemented immediately.

  1. Requiring notice and consent for people subject to facial recognition authentication, such as in a workplace or on a social media platform.
  2. Implementing export controls that prevent the sale of facial matching technology — the kind police could use to pick wanted criminals out of a crowd.
  3. Mandating that law enforcement disclose its use of facial recognition technology and publish regular transparency reports.

For big companies, overseas business interests can complicate matters.

  • In its white paper, IBM says companies "must be accountable for ensuring they don't facilitate human rights abuses by deploying technologies such as facial matching in regimes known for human rights violations."
  • Earlier this year, BuzzFeed News reported that IBM was among several companies marketing facial recognition in the notoriously repressive United Arab Emirates.
  • IBM says the technology referenced in the BuzzFeed story cannot identify individuals based on their faces. That is, it's not facial matching software.

What they're saying:

  • "We're responsible stewards of technology," Montgomery tells Axios. "We vet client engagements at the highest levels of the company."
  • "If adopted, IBM's proposal would clear the way for the deployment of this authoritarian technology in our communities, a move opposed by the public, AI experts and democratically elected legislatures across the United States," says Matt Cagle of the ACLU of Northern California.