
A court hands down an opinion: thoughtfully reasoned, forcefully argued, eminently fair. It’s lauded widely — until it comes out that the author wasn't a renowned judge but rather an advanced artificial intelligence system.
The big question: Should the opinion be rejected because of its source, even if it’s indistinguishable from — or better than — what a human would have produced?
Even though today’s AI is woefully unprepared for the job, legal scholars are already debating whether computers should someday be entrusted with weighty legal decisions.
- Experts differ on whether it’s sufficient for an AI judge to perfectly emulate a human one, given that it’s not actually capable of "thinking" like a person.
- The core issue is AI’s decision-making process — or lack thereof.
Intelligent is as intelligent does, argues Eugene Volokh, a UCLA law professor, in a forthcoming paper for the Duke Law Journal.
- A computer should be accepted if a panel of humans thinks the opinions it writes are on par with or better than those written by a human judge — the legal version of the Turing Test, Volokh argues.
- "It becomes hard to say why we should prefer judges who are proven to produce a worse work product," Volokh said in a lecture at Stanford last week.
- Beyond smarts, said Volokh, "wise, merciful, compassionate, and judicious is as it does." It doesn’t matter whether a computer can actually possess these human traits; it only matters that a human would say they are reflected in what it produces.
But University of Ottawa professor Ian Kerr says how a decision is arrived at matters as much as the outcome.
- "Everything is in the practice," says Kerr, who co-wrote a 2014 article on AI judges with fellow University of Ottawa law professor Carissima Mathen. "The process is the point of the exercise."
Even when an AI system seems to make thoughtful judgments, it is actually only piecing together elements of cases in its database.
- It’s the same way AI-generated artwork or music is really just an amalgam of human-created work, combined in a novel way.
- To Kerr, this distinction matters. Citing the late British legal philosopher H.L.A. Hart, an Oxford professor, he says that machines can know what is, drawing on training datasets, but not what ought to be, which requires a broader moral understanding of the world.
This means an AI judge could, at best, perfectly apply existing law, but never push it forward the way Supreme Court justices do in landmark cases.
- Without a sense of changing societal norms, a computerized judge would be stuck in the past, capable only of regurgitating the conventions embedded in the common law that existed at the time of its creation.
- "An algorithm could've given us Dred Scott or Korematsu," said Ryan Calo, a law professor at the University of Washington, referring to a pair of Supreme Court decisions now considered morally wrong. But it would not know, decades later, that it had misjudged.
- In this way, a mechanical judge would be extremely conservative, Calo said, interpreting the law’s text without considering any outside factors at all.
Ultimately, Volokh agrees, people will continue to have a place in the justice system even if computers are proven to write better opinions.
- Human judges may need to step in when a case calls for developing the law, he said.
- Taking away people entirely would undermine a key element of the justice system, says Calo: the human touch. The dignity of having one's case heard by humans is a necessary part of the process of justice, he said.