IBM calls for regulation to avoid facial recognition bans
Facial recognition at Dulles Airport. Photo: Bill O'Leary/The Washington Post/Getty
IBM, one of several Big Tech companies selling facial recognition programs, is calling on Congress to regulate the technology — but not too much.
Why it matters: China has built a repressive surveillance apparatus with facial recognition; now, some U.S. cities are rolling it out for law enforcement. But tech companies worry that opponents will react to these developments by banning the technology outright.
- Big Tech is threatened by a yearlong groundswell of bans and proposed restrictions on facial recognition bubbling up in cities like San Francisco and states like Massachusetts.
- The companies say these moves would cut off beneficial uses of the technology, like speeding up airport security or finding missing children.
- Yes, but: They stand to gain from keeping the market open.
What's happening: In a white paper shared first with Axios, IBM is calling for what it terms "precision regulation" — limiting potentially harmful uses rather than forbidding the technology entirely.
- IBM proposes treating different kinds of facial recognition differently. Face detection software, which simply counts the number of faces in a scene, is less prone to abuse than face matching, which can pick specific people out of a crowd.
- "There will always be use cases that will be off-limits," IBM chief privacy officer Christina Montgomery tells Axios. "That includes mass surveillance and racial profiling."
At issue is public trust in facial recognition. Companies hope that curtailing some uses will rescue the technology from sliding into pariah status.
- According to a Morning Consult/Politico poll this summer, two-fifths of Americans support the technology, down from nearly half the year before.
- "Absent the trust, you're going to see calls for bans," Montgomery says.
Details: IBM calls for three policies it says are ready to be implemented immediately.
- Requiring notice and consent for people subject to facial recognition authentication, such as in a workplace or on a social media platform.
- Implementing export controls that prevent the sale of facial matching technology — the kind police could use to pick wanted criminals out of a crowd.
- Mandating that law enforcement agencies disclose their use of facial recognition technology and publish regular transparency reports.
For big companies, overseas business interests can complicate matters.
- In its white paper, IBM says companies "must be accountable for ensuring they don't facilitate human rights abuses by deploying technologies such as facial matching in regimes known for human rights violations."
- Earlier this year, BuzzFeed News reported that IBM was among several companies marketing facial recognition in the notoriously repressive United Arab Emirates.
- IBM says the technology referenced in the BuzzFeed story cannot identify individuals based on their faces. That is, it's not facial matching software.
What they're saying:
- "We're responsible stewards of technology," Montgomery tells Axios. "We vet client engagements at the highest levels of the company."
- "If adopted, IBM's proposal would clear the way for the deployment of this authoritarian technology in our communities, a move opposed by the public, AI experts and democratically elected legislatures across the United States," says Matt Cagle of the ACLU of Northern California.