May 16, 2019 - Technology

Uncovering secret government AI

Illustration of a person with different skin tones with a red targeting box around their face

Illustration: Sarah Grillo/Axios

The criminal justice system has eagerly taken up AI tools for surveillance, policing and sentencing — software that can track people's faces, deploy patrols where crime appears most likely, and recommend whether to grant bail.

What's happening: But these tools are often cloaked in secrecy, so it can be impossible to judge their accuracy, or even know where and how they are being used. Critics say this opens the door to misuse and discrimination.

Driving the news: San Francisco yesterday approved the most restrictive government surveillance regulations in the U.S.

  • The new measure, if it is passed a second time next week, entirely bans official facial recognition in the city — though it does not apply to federal agencies — and requires every department that wants to use surveillance technology to apply for permission.
  • At the other extreme, across the Pacific, China is implementing the most Orwellian surveillance system on the planet, leaning especially hard on facial recognition to identify and track its Uighur minority.

Why it matters: When poorly coded or deployed, AI systems can make huge mistakes or harm some groups more than others. But where faulty facial recognition in Snapchat might mean some people can't use a fun filter, flawed police software can land the wrong people in jail.

  • Because these systems are tightly guarded, outside experts can't check them for bias and accuracy, and the public doesn't know how well they perform.
  • Read this: London police, responding to a freedom of information request, said this month that their facial recognition system misidentified people as criminals a whopping 96% of the time.
  • What's more, experts and watchdogs say they don't actually know where such systems have been deployed around the United States, and defendants are often in the dark about whether advanced surveillance tech was used against them.

"You can't meaningfully build up a criminal defense, or change policies, if you don't know how these tools are being used," says Alice Xiang, a researcher at the Partnership on AI.

San Francisco will soon have its first-ever complete public list of surveillance technology currently in use, says Lee Hepner, legislative aide to San Francisco Supervisor Aaron Peskin, who introduced the measure.

  • "Communities have a right to know whether their governments use dangerous surveillance technology to track their daily lives," says Matt Cagle, an attorney at the ACLU of Northern California who advocated for the measure.
  • Several other cities — including Oakland and Somerville, a city in the Boston area — are considering similar legislation.

The big picture: The uptake of AI in criminal justice mirrors a broad push to automate difficult or sensitive decisions, like hiring and diagnosing diseases from medical scans. But these systems are often implemented without proper safeguards, says Peter Eckersley, research director at the Partnership on AI.

  • The predictive systems used by nine police departments may have relied on biased data focused disproportionately on minority populations, according to a March report from the AI Now Institute at New York University. If the report is accurate, that bias may be enshrined in new predictive policing systems.
  • Last month, the Partnership on AI studied risk-assessment tools used to inform bail decisions and found that every system currently in use is flawed and should not be used.

What's next: Facial recognition is the most publicly controversial of the various AI tools governments use, and it's the one most likely to be regulated. Companies have asked the federal government to put rules in place for law enforcement use of the technology.
