May 17, 2018 - Technology

The biggest ethical issues for AI in medicine

A doctor's examination room. Photo: Roberto Machado Noa/LightRocket via Getty Images

Before you get too excited about those artificial intelligence doctors we’ll all have someday, you should read this briefing note from the Nuffield Council on Bioethics, a London-based group that ponders tough ethical questions in medicine.

What they're saying: The Council has a pretty handy guide to the things that can go wrong with AI. For example, it’s not always reliable. (In one clinical trial, an app incorrectly told doctors to send asthma patients home.)

  • It can’t always explain its decisions, as Axios’ Ina Fried has written.
  • It can be biased if there are biases in the data used to train it.
  • Patients could get isolated if they’re dealing with AI all the time instead of people.
  • It will have to meet strict standards for data privacy and security.
  • It could be used for bad things, like surveillance.

The bottom line: It’s clearly meant to be a glass-half-empty look at AI, but the point is that we should all think it through a bit and not just embrace AI because it’s cool.
