Jun 12, 2019

A program to mask suspects' race for better charging decisions

Photo: A San Francisco police car in front of a courthouse. Justin Sullivan/Getty

Next month, the San Francisco District Attorney's office will begin using a computer program developed at Stanford to strip police reports of names, neighborhoods and other proxies for race, such as eye color or hairstyle.

Why it matters: The effort is meant to remove bias. Prosecutors decide whether to charge suspects based on police reports and evidence — but they're liable to be swayed by their own biases, which could lead them to bring charges more often against people of color.

The big picture: The U.S. criminal justice system is chock-full of racial disparities. Our prisons are disproportionately black and Hispanic — the two groups make up 56% of incarcerated people, but only 28% of the U.S. adult population.

  • Among other things, it's the result of countless layers of systemic bias, from overpolicing in neighborhoods of color to sentencing disparities.
  • The SFDA–Stanford project addresses one link in the chain: prosecutorial decisions.

"We want to make sure that when we're charging somebody, race doesn't come into it," a spokesperson for the SFDA's office tells Axios. "If we're able to take implicit bias out of even 90% of these cases, that's a huge achievement."

How it works: The system replaces racial proxies with generic placeholders — Person 1, Officer 2, Neighborhood 3. The idea is that a prosecutor reading a sanitized report will focus on the narrative rather than being influenced by their own preconceptions.

  • It's not fancy AI; rather, it's more like an advanced search-and-replace tool, and that's on purpose. Sharad Goel, the head of the Stanford team behind the project, says a simpler, rules-based system is more predictable, consistent and interpretable (a rough sketch of that approach follows this list).
  • In the grand scheme, it's a modest step. After reading the edited report and making an initial charging decision, prosecutors must read the full, unmasked report. They can then change their decision, but the switch will be recorded and later analyzed.
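The Stanford tool itself hasn't been published, but the kind of rules-based substitution described above might look roughly like the Python sketch below. The term lists, categories and the mask_report function are illustrative assumptions, not the real system's rules.

```python
import re
from itertools import count

# Hypothetical term lists standing in for the DA's real lexicons of racial
# proxies; the actual tool's rules and categories are not public.
PROXY_TERMS = {
    "Person": ["John Smith", "Maria Garcia"],
    "Neighborhood": ["Bayview", "Mission District"],
    "Descriptor": ["black hair", "brown eyes"],
}

def mask_report(text: str) -> str:
    """Swap each known proxy term for a numbered generic placeholder
    (Person 1, Neighborhood 1, ...), reusing the same placeholder when a
    term appears more than once in the report."""
    assigned = {}                                   # term -> issued placeholder
    counters = {label: count(1) for label in PROXY_TERMS}
    for label, terms in PROXY_TERMS.items():
        for term in terms:
            if re.search(re.escape(term), text, flags=re.IGNORECASE):
                placeholder = assigned.setdefault(term, f"{label} {next(counters[label])}")
                text = re.sub(re.escape(term), placeholder, text, flags=re.IGNORECASE)
    return text

print(mask_report("Officers stopped John Smith near Bayview. John Smith had black hair."))
# -> Officers stopped Person 1 near Neighborhood 1. Person 1 had Descriptor 1.
```

A plain lookup-and-replace scheme like this is easy to audit: every redaction can be traced back to a specific rule, which is the predictability Goel is describing.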

What's next: Later this year, the Stanford team will take a bigger leap. It's working on a machine learning program that will flag cases that, based on the DA's history, are most likely to be discharged.

  • As with any system that is based on past patterns, there's a danger of perpetuating previously biased practices.
  • But a 2017 study indicated that racial disparities in San Francisco's criminal justice system are largely not the result of the DA's charging decisions.
  • Goel says the tool's results will be regularly audited to make sure it's not disproportionately recommending discharge for some groups over others.
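The article doesn't describe how those audits will work. One simple form they could take is comparing the share of cases the model flags for likely discharge across demographic groups; the Python below is a hypothetical sketch of that check, with made-up group labels and data.

```python
from collections import defaultdict

# Hypothetical audit log: (group label, whether the model flagged the case
# as likely to be discharged). Real labels and case outcomes would come from
# the DA's files, not this made-up sample.
audit_log = [
    ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

def flag_rates(log):
    """Share of cases flagged for likely discharge, broken out by group."""
    totals, flagged = defaultdict(int), defaultdict(int)
    for group, was_flagged in log:
        totals[group] += 1
        flagged[group] += was_flagged
    return {group: flagged[group] / totals[group] for group in totals}

print(flag_rates(audit_log))
# -> {'group_a': 0.666..., 'group_b': 0.333...}; a persistent gap like this
#    would be the signal to dig into the model's recommendations.
```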