AI systems intended to help people make tough choices — like prescribing the right drug or setting the length of a prison sentence — can instead end up effectively making those choices for them, thanks to human faith in machines.
How it works: These programs generally offer new information or a few options meant to help a human decision-maker choose more wisely.
- But an overworked or overly trusting person can fall into a rubber-stamping role, unquestioningly following algorithmic advice.
Why it matters: Over-reliance on potentially faulty AI can harm the people whose lives are shaped by critical decisions about employment, health care, legal proceedings and more.
The big picture: This phenomenon is called automation bias. Early studies focused on airplane autopilot, but as automated systems grow more complex and take on higher-stakes decisions, the consequences could become far more dangerous.
- AI carries an aura of legitimacy and accuracy, burnished by overeager marketing departments and underinformed users.
- But AI is just fancy math. Like any equation, it returns wrong answers when you feed it incorrect inputs. And if it learns patterns that don't reflect the real world, its output will be equally flawed (a toy example follows this list).
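To make the garbage-in, garbage-out point concrete, here is a minimal sketch, not drawn from any system mentioned in this story: a model fit to mislabeled data confidently returns systematically wrong answers.

```python
import numpy as np

# Toy illustration of "incorrect inputs, wrong answers": every label in
# the training data was recorded 5 units too high.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
true_y = 2.0 * x + 1.0        # the real-world relationship
recorded_y = true_y + 5.0     # the flawed data the model actually sees

# Ordinary least squares fits the flawed data without complaint...
slope, intercept = np.polyfit(x, recorded_y, 1)

# ...and the resulting model is systematically wrong, with no warning.
print(f"learned model: y = {slope:.2f}x + {intercept:.2f}")  # intercept off by ~5
print(f"prediction at x=4: {slope * 4 + intercept:.2f} vs. true value {2.0 * 4 + 1.0}")
```

The model never signals that anything is amiss; to a trusting user, its output looks just as authoritative as that of a correctly trained one.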
"When people have to make decisions in relatively short timeframes, with little information — this is when people will tend to just trust whatever the algorithm gives them," says Ryan Kennedy, a University of Houston professor who researches trust and automation.
- "The worst-case scenario is somebody taking these algorithmic recommendations, not understanding them, and putting us in a life or death situation," Kennedy tells Axios.
Automation bias caused by simpler technologies has already been blamed for real-world disasters. And now, institutions are pushing AI systems further into high-stakes decisions.
- In hospitals: A forthcoming study found that Stanford physicians "followed the advice of [an AI] model even when it was pretty clearly wrong in some cases," says Matthew Lungren, a study author and the associate director of the university's Center for Artificial Intelligence in Medicine and Imaging.
- At war: Weapons are increasingly automated, but usually still require human approval before they shoot to kill. In a 2004 paper, Missy Cummings, now the director of Duke University's Humans and Autonomy Lab, wrote that automated aids for aviation or defense "can cause new errors in the operation of a system if not designed with human cognitive limitations in mind."
- On the road: Sophisticated driver assists like Tesla's Autopilot still require people to intervene in dangerous situations. But a 2015 Duke study found that humans lose focus when they're just monitoring a car rather than driving it.
And in the courtroom, human prejudice mixes in.
- In a recent Harvard experiment, participants deviated from the automated risk assessments they were shown in a racially skewed way: they were more likely to lower their own risk predictions for white defendants and to raise them for black defendants.
What's next: More information about an algorithm's confidence level can give people clues about how much to lean on it. Lungren says the Stanford physicians made fewer mistakes when they were given both a recommendation and an estimate of its accuracy (a sketch of the idea appears after the bullets below).
- In the future, machines may adjust to a user's behavior: showing their work when a person trusts their advice too much, or backing off if the user seems tired or stressed, since those states can make people less critical.
- "Humans are good at seeing nuance in a situation that automation can't," says Neera Jain, a Purdue professor who studies human–machine interaction. "[We are] trying to avoid those situations where we become so over-reliant that we forget we have our own brains that are powerful and sophisticated."