Uncovering secret government AI

Illustration: Sarah Grillo/Axios

The criminal justice system has eagerly taken up AI tools for surveillance, policing and sentencing — software that can track people's faces, deploy patrols where crime appears most likely, and recommend whether to grant bail.

What's happening: But these tools are often cloaked in secrecy, so it can be impossible to judge their accuracy, or even know where and how they are being used. Critics say this opens the door to misuse and discrimination.

Driving the news: San Francisco yesterday approved the most restrictive government surveillance regulations in the U.S.

  • The new measure, which requires a second vote next week to take effect, bans official facial recognition in the city outright (though it does not apply to federal agencies) and requires every department that wants to use surveillance technology to apply for permission.
  • At the other extreme, across the Pacific, China is building the most Orwellian surveillance system on the planet, leaning especially hard on facial recognition to identify and track its Uighur minority.

Why it matters: When poorly coded or deployed, AI systems can make huge mistakes or harm some groups more than others. But where faulty facial recognition in Snapchat might mean some people can't use a fun filter, flawed police software can land the wrong people in jail.

  • Because these systems are tightly guarded, outside experts can't check them for bias and accuracy, and the public doesn't know how well they perform.
  • Read this: London police, responding to a freedom of information request, said this month that its facial recognition system misidentified people as criminals a whopping 96% of the time.
  • What's more, experts and watchdogs say they don't actually know where such systems have been deployed around the United States, and defendants are often in the dark about whether advanced surveillance tech was used against them.

"You can't meaningfully build up a criminal defense, or change policies, if you don't know how these tools are being used," says Alice Xiang, a researcher at the Partnership on AI.

San Francisco will soon have its first-ever complete public list of surveillance technology currently in use, says Lee Hepner, legislative aide to San Francisco Supervisor Aaron Peskin, who introduced the measure.

  • "Communities have a right to know whether their governments use dangerous surveillance technology to track their daily lives," says Matt Cagle, an attorney at the ACLU of Northern California who advocated for the measure.
  • Several other cities — including Oakland and Somerville, a city in the Boston area — are considering similar legislation.

The big picture: The uptake of AI in criminal justice mirrors a broader push to automate difficult or sensitive decisions, like hiring and diagnosing diseases from medical scans. But these systems are often implemented without proper safeguards, says Peter Eckersley, research director at the Partnership on AI.

  • The predictive systems used by nine police departments may have relied on biased data focused disproportionately on minority populations, according to a March report from AI Now and New York University. If the report is accurate, that biased data may now be enshrined in new predictive policing systems.
  • Last month, the Partnership on AI studied risk-assessment tools used to inform bail decisions and found that every system currently in use is flawed and should not be used.

What's next: Facial recognition is the most publicly controversial of the various AI tools governments use, and it's the one most likely to be regulated. Companies have asked the federal government to put rules in place for law enforcement use of the technology.
