Why good AI should be able to show its work

[Illustration: the scales of justice balancing a 1 and a 0. Credit: Lazaro Gamio/Axios]

The first step toward ethical artificial intelligence is teaching the computer to explain its decision-making, something known in the field as explainable AI.

Why it matters: Right now, many deep learning algorithms don't make clear how they arrived at their predictions or conclusions. That lack of visibility into the data, steps and calculations behind an outcome makes it hard to root out bias or other algorithmic errors that could affect results like who gets a loan or how much a factory should produce.
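The article doesn't name a specific technique, but the contrast it draws can be sketched with a toy interpretable model. In this hypothetical example (the feature names, weights, and loan-scoring scenario are all illustrative, not from the article), a linear scorer "shows its work" by reporting how much each input contributed to the final score, instead of returning only an opaque number:

```python
# Illustrative sketch of explainable AI: a linear scorer that reports
# each feature's contribution to a decision, not just the final score.
# All feature names and weights here are hypothetical.

def explain_score(features, weights, bias):
    """Return the total score and a per-feature contribution breakdown."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    return score, contributions

# Hypothetical loan-scoring model and applicant.
weights = {"income": 0.4, "debt_ratio": -0.7, "years_employed": 0.2}
applicant = {"income": 1.2, "debt_ratio": 0.8, "years_employed": 0.5}

score, contributions = explain_score(applicant, weights, bias=0.1)
print(f"score = {score:.2f}")
# List contributions from largest to smallest effect, so a reviewer
# can see which factors drove the decision.
for name, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {c:+.2f}")
```

A breakdown like this is what lets a reviewer spot a suspicious factor (say, a proxy for a protected attribute) driving a loan decision; a deep network that emits only the final score offers no such handle.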