Feb 1, 2019 - Technology
Expert Voices

Why the "trolley problem" is the wrong way to think about AVs

People hanging off a streetcar. Photo: Lisa Larsen/The LIFE Picture Collection/Getty Images

The "trolley problem" sets up a quandary: whether to let a trolley stay the course and hit numerous people, or redirect it and hit just one person. Recently, researchers have designed similar thought experiments around AVs.

Why it matters: AVs are being taught to drive safely and avoid harm entirely, just as human drivers are. But these experiments rest on unrealistic assumptions about what AV technology can do, and media coverage suggesting that AVs really could face such choices may be contributing to public distrust of AVs.

What’s happening: In a recent MIT study, thousands of participants were asked whether an AV should kill a driver or a pedestrian, a homeless man or an executive, and so on, sorting people into categories as specific as male athlete, female doctor, small child, or baby in a stroller.

  • Participants' choices were assembled into a preference scale, ranking whom participants found most preferable to spare or to kill.
  • This study and earlier research have been widely publicized as capturing essential ethical insights that should be built into AVs.

The National Science Foundation, meanwhile, funded a group of philosophers working on computer modeling of how AVs could respond to different scenarios, depending on their ethical coding.

Between the lines:

  • There is no evidence that human drivers ever face split-second choices between two fatal outcomes with no other option, so programming AVs to anticipate such scenarios would not improve safety.
  • "Driverless dilemmas" mischaracterize AV capabilities. It's unlikely an AV could detect personal details, let alone a person's profession. Instead, AVs are being taught to track everything around them, and swerve or slow down to avoid hitting anyone.
  • Publicity around this research could be contributing to public distrust of AVs by suggesting that their behavior will hinge, unrealistically, on the ethical preferences of their developers.

Yes, but: While AVs are not likely to face forced-choice ethical dilemmas, they may be taught to prioritize detecting and avoiding vulnerable road users, like pedestrians, over stationary objects, like parked cars.

  • In that sense, ethical choices would factor into programming, but in a context that aligns with how people are taught to drive in order to avoid harm. (The sketch below illustrates what such a priority weighting could look like.)
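To make that concrete, here is a minimal, purely illustrative sketch (ours, not drawn from the MIT study or any production AV system) of how a path planner might weight detected objects. The class names and weights are hypothetical; the only point is that vulnerable road users carry a larger avoidance penalty than stationary objects like parked cars.

```python
# Illustrative toy example only -- not from the article or any real AV stack.
# It shows how a planner might weight detected objects so that vulnerable
# road users (pedestrians, cyclists) dominate the cost of a candidate path,
# while stationary objects such as parked cars carry a smaller penalty.
from dataclasses import dataclass

# Hypothetical avoidance weights: higher means "avoid at greater cost."
AVOIDANCE_WEIGHT = {
    "pedestrian": 100.0,
    "cyclist": 80.0,
    "vehicle_moving": 40.0,
    "vehicle_parked": 10.0,
}

@dataclass
class DetectedObject:
    kind: str            # e.g. "pedestrian", "vehicle_parked"
    distance_m: float    # closest approach of a candidate path to the object

def path_risk_cost(objects: list[DetectedObject]) -> float:
    """Sum a simple distance-discounted penalty over all detected objects."""
    cost = 0.0
    for obj in objects:
        weight = AVOIDANCE_WEIGHT.get(obj.kind, 20.0)
        # Penalty grows as the path passes closer to the object.
        cost += weight / max(obj.distance_m, 0.5)
    return cost

if __name__ == "__main__":
    # A path that shaves close to a pedestrian costs far more than one that
    # passes equally close to a parked car, so a planner comparing the two
    # slows or swerves to protect the pedestrian first.
    near_pedestrian = [DetectedObject("pedestrian", 1.0),
                       DetectedObject("vehicle_parked", 5.0)]
    near_parked_car = [DetectedObject("vehicle_parked", 1.0),
                       DetectedObject("pedestrian", 5.0)]
    print(f"cost near pedestrian: {path_risk_cost(near_pedestrian):.1f}")
    print(f"cost near parked car: {path_risk_cost(near_parked_car):.1f}")
```

In this toy setup the planner never chooses whom to harm; it simply assigns a higher cost to paths that pass near pedestrians, which is closer to how drivers are actually taught to behave.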

Sam Anthony is co-founder and CTO of Perceptive Automata. Julian De Freitas is a doctoral candidate in psychology at Harvard University.

Go deeper: Read the full paper responding to driverless dilemmas.
