People hanging off a streetcar. Photo: Lisa Larsen/The LIFE Picture Collection/Getty Images
The "trolley problem" sets up a quandary: whether to let a trolley stay the course and hit numerous people, or redirect it and hit just one person. Recently, researchers have designed similar thought experiments around AVs.
Why it matters: AVs are being taught to drive safely and avoid harm entirely, just as human drivers are. But media coverage of these experiments, which rest on unrealistic expectations of AV technology and suggest that AVs really could face such choices, may be contributing to public distrust of AVs.
What’s happening: In a recent MIT study, thousands of participants were asked whether an AV should kill a driver or a pedestrian, a homeless man or an executive, and so on. The scenarios sorted people into categories as specific as male athlete, female doctor, small child, or baby in a stroller.
- Participants' choices were assembled into a preference scale, ranking who is most preferable to spare or kill (a simplified illustration of that aggregation follows this list).
- This study and earlier research have been widely publicized as capturing essential ethical insights that should be built into AVs.
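As a purely hypothetical illustration, forced choices like these could be aggregated into a ranking with a simple win-rate tally over pairwise responses. The category names and data below are invented, and the MIT study used more sophisticated statistical methods; this sketch only shows the general idea of turning pairwise choices into a preference scale:

```python
from collections import defaultdict

# Each response records (category spared, category sacrificed).
# Hypothetical data for illustration only.
responses = [
    ("small child", "executive"),
    ("female doctor", "male athlete"),
    ("small child", "male athlete"),
    ("executive", "homeless man"),
]

spared = defaultdict(int)    # times each category was spared
appeared = defaultdict(int)  # times each category appeared in a matchup

for kept, lost in responses:
    spared[kept] += 1
    appeared[kept] += 1
    appeared[lost] += 1

# Rank categories by the share of matchups in which they were spared.
ranking = sorted(appeared, key=lambda c: spared[c] / appeared[c], reverse=True)
print(ranking)
```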
The National Science Foundation, meanwhile, funded a group of philosophers working on computer modeling of how AVs could respond to different scenarios, depending on their ethical coding.
Between the lines:
- There is no evidence that human drivers ever face split-second choices between two fatal outcomes with no alternative. Programming AVs to anticipate such scenarios would not improve safety.
- "Driverless dilemmas" mischaracterize AV capabilities. It's unlikely an AV could detect personal details, let alone a person's profession. Instead, AVs are being taught to track everything around them, and swerve or slow down to avoid hitting anyone.
- Publicity around this research could be contributing to public distrust of AVs, because it suggests, unrealistically, that AV behavior will be dictated by the ethics of their developers.
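To make that distinction concrete, here is a deliberately simplified sketch of the avoid-everything behavior described above. Every name, threshold, and the brake-versus-swerve rule is an assumption for illustration, not any real AV stack's logic; the point is that nothing in it classifies who is in the path:

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    distance_m: float  # distance ahead along the planned path
    lateral_m: float   # lateral offset from the path centerline

LANE_HALF_WIDTH_M = 1.5  # assumed corridor the vehicle must keep clear
BRAKE_DISTANCE_M = 30.0  # assumed distance at which avoidance begins

def plan_action(tracked: list[TrackedObject]) -> str:
    """Pick an action from everything currently tracked. Every obstacle
    in the path is avoided alike; none is ranked against another."""
    in_path = [o for o in tracked
               if abs(o.lateral_m) < LANE_HALF_WIDTH_M
               and 0 < o.distance_m < BRAKE_DISTANCE_M]
    if not in_path:
        return "maintain"
    # Prefer braking; swerve only if there is no longer room to stop.
    nearest = min(o.distance_m for o in in_path)
    return "brake" if nearest > 10.0 else "swerve"

print(plan_action([TrackedObject(distance_m=25.0, lateral_m=0.4)]))  # brake
```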
Yes, but: While AVs are not likely to face forced-choice ethical dilemmas, they may be taught to prioritize detecting and avoiding vulnerable road users, like pedestrians, over stationary objects, like parked cars.
- In that sense, ethical choices would factor into the programming, but in a context that aligns with how people are taught to drive in order to avoid harm; a sketch of what that prioritization might look like follows.
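One hedged way to picture that kind of prioritization is a weighted cost function, in which maneuvers that pass close to vulnerable road users are penalized far more heavily than maneuvers that pass close to parked cars. The object classes, weights, and helper function below are invented for illustration:

```python
# Assumed cost weights: higher means "avoid more aggressively".
AVOIDANCE_WEIGHT = {
    "pedestrian": 100.0,
    "cyclist": 100.0,
    "moving_vehicle": 10.0,
    "parked_car": 1.0,
}

def maneuver_cost(clearances: dict[str, float]) -> float:
    """Sum weighted penalties: a small clearance to a vulnerable road
    user dominates the cost of the whole maneuver."""
    return sum(AVOIDANCE_WEIGHT[kind] / max(clearance_m, 0.1)
               for kind, clearance_m in clearances.items())

# Clearances (in meters) under two hypothetical maneuvers.
stay = maneuver_cost({"pedestrian": 0.5, "parked_car": 3.0})
swerve = maneuver_cost({"pedestrian": 3.0, "parked_car": 0.5})
print("swerve" if swerve < stay else "stay")  # swerve
```

Under these assumed weights, swerving toward a parked car to gain clearance from a pedestrian scores as the cheaper maneuver: the "ethics" shows up as a standing bias toward protecting people, not as a choice between victims.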
Sam Anthony is co-founder and CTO of Perceptive Automata. Julian De Freitas is a doctoral candidate in psychology at Harvard University.
Go deeper: Read the full paper responding to driverless dilemmas.