
AI systems intended to help people make tough choices — like prescribing the right drug or setting the length of a prison sentence — can instead end up effectively making those choices for them, thanks to human faith in machines.

How it works: These programs generally offer new information or a few options meant to help a human decision-maker choose more wisely. But an overworked or overly trusting person can fall into a rubber-stamping role, unquestioningly following algorithmic advice.

Why it matters: Over-reliance on potentially faulty AI can harm the people whose lives are shaped by critical decisions about employment, health care, legal proceedings and more.

The big picture: This phenomenon is called automation bias. Early studies focused on airplane autopilot, but as automated systems grow more complex, the problem could get much worse, with more dangerous consequences.

  • AI carries an aura of legitimacy and accuracy, burnished by overeager marketing departments and underinformed users.
  • But AI is just fancy math. Like any equation, if you give it incorrect inputs, it will return wrong answers. And if it learns patterns that don't reflect the real world, its output will be equally flawed, as the toy sketch below shows.
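To make that concrete, here is a minimal sketch in Python with scikit-learn, using invented data rather than any real clinical or sentencing system: a model that learns a shortcut pattern absent from the real world aces its training data, then does no better than a coin flip once deployed.

```python
# Toy "garbage in, garbage out" demo. Hypothetical data invented for
# illustration; not any real decision-support model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Training data with a spurious shortcut: the second feature simply
# copies the label, so the model can score perfectly without learning
# anything about the real world.
y_train = rng.integers(0, 2, 1000)
X_train = np.column_stack([rng.normal(size=1000), y_train.astype(float)])
model = LogisticRegression().fit(X_train, y_train)

# In deployment the shortcut is gone: the second feature is just noise.
y_test = rng.integers(0, 2, 1000)
X_test = np.column_stack([rng.normal(size=1000), rng.normal(size=1000)])

print("training accuracy:", model.score(X_train, y_train))  # ~1.0
print("deployed accuracy:", model.score(X_test, y_test))    # ~0.5, a coin flip
```

The math is internally consistent in both cases; only the relationship between the data and reality has changed.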

Automation bias caused by simpler technologies has already been blamed for real-world disasters.

  • In 2016, a patient was prescribed the wrong medication when a pharmacist chose a similarly named drug from a list on a computer. A nurse noticed — but administered the meds anyway, assuming the electronic record was correct. The patient had heart and blood pressure problems as a result.
  • In 2010, a pipeline dumped nearly 1 million gallons of crude oil into Michigan wetlands and rivers after operators repeatedly ignored "critical alarms." They were desensitized because of previous false alarms, according to a 2016 post-mortem report — showing another threat from over-reliance on machines.

"When people have to make decisions in relatively short timeframes, with little information — this is when people will tend to just trust whatever the algorithm gives them," says Ryan Kennedy, a University of Houston professor who researches trust and automation.

  • "The worst-case scenario is somebody taking these algorithmic recommendations, not understanding them, and putting us in a life or death situation," Kennedy tells Axios.

Now, institutions are pushing AI systems further into high-stakes decisions.

  • In hospitals: A forthcoming study found that Stanford physicians "followed the advice of [an AI] model even when it was pretty clearly wrong in some cases," says Matthew Lungren, a study author and the associate director of the university's Center for Artificial Intelligence in Medicine and Imaging.
  • At war: Weapons are increasingly automated, but usually still require human approval before they shoot to kill. In a 2004 paper, Missy Cummings, now the director of Duke University's Humans and Autonomy Lab, wrote that automated aids for aviation or defense "can cause new errors in the operation of a system if not designed with human cognitive limitations in mind."
  • On the road: Sophisticated driver assists like Tesla's Autopilot still require people to intervene in dangerous situations. But a 2015 Duke study found that humans lose focus when they're just monitoring a car rather than driving it.

And in the courtroom, human prejudice mixes in.

What's next: More information about an algorithm's confidence level can give people clues about how much to lean on it. Lungren says the Stanford physicians made fewer mistakes when they were given a recommendation along with an accuracy estimate (a sketch of the idea follows the bullets below).

  • In the future, a machine may adjust to its user's behavior: showing its work when the person trusts its advice too much, or backing off if the user seems tired or stressed, states that make people less critical.
  • "Humans are good at seeing nuance in a situation that automation can't," says Neera Jain, a Purdue professor who studies human–machine interaction. "[We are] trying to avoid those situations where we become so over-reliant that we forget we have our own brains that are powerful and sophisticated."
