
We're seeing the beginnings of a tug-of-war at the highest levels of government over how much access people should have to AI systems that make critical decisions about them.
What's happening: Life-changing determinations, like the length of a criminal's sentence or the terms of a loan, are increasingly informed by AI programs. These can churn through oodles of data to detect patterns invisible to the human eye, potentially making more accurate predictions than older methods.
Why it matters: The systems are so complex that it can be hard to know how they arrive at answers — and so valuable that their creators often try to restrict access to their inner workings, making it potentially impossible to challenge their consequential results.
Driving the news: Two recent proposals are pulling in opposite directions.
- A bill from Rep. Mark Takano, a California Democrat, would block companies that design AI systems for criminal justice from withholding details about their algorithms by claiming they’re trade secrets.
- A proposal from the Department of Housing and Urban Development (HUD) would protect landlords, lenders and insurers that want to use algorithms for important determinations, shielding them from "disparate impact" claims that the algorithms unintentionally harm certain groups of people.
These are among the earliest attempts to set down rules and definitions for algorithmic transparency. How they shake out could set rough precedents for how the government approaches the many similar questions to come.
Proponents of more access say it's vital to test whether walled-off systems are making serious mistakes or unfair determinations — and argue that the potential for harm should outweigh companies' interest in protecting their secrets.
- Developers regularly invoke trade-secret rights to keep their algorithms — used for key evidence like DNA matches or bullet traces — away from the accused, says Rebecca Wexler, a UC Berkeley law professor who consulted on Takano's bill.
- "We need to give defendants the rights to get the source code and [not] allow intellectual property rights to be able to trump due process rights," Takano tells Axios. His bill also asks the government to set standards for forensic algorithms and test every program before it is used.
The HUD proposal would require a plaintiff to show that an algorithmic decision was based on an illegal proxy, like race or gender, in order to succeed in a lawsuit. But critics say that can be impossible to determine without access to the system's inner workings.
- "By creating a safe harbor around algorithms that do not use protected class variables or close proxies, the rule would set a precedent that both permits the proliferation of biased algorithms and hampers efforts to correct for algorithmic bias," says Alice Xiang, a researcher at the Partnership on AI.
- HUD is soliciting comments on the proposal until later this month.
The other side: "The goal here is to bring more certainty into this area of the law," said HUD General Counsel Paul Compton in an August press conference. He said the proposal "frees up parties to innovate, take risks and meet the needs of their customers without the fear that their efforts will be second-guessed through statistics years down the line."