The biggest difficulty with self-driving cars is not batteries, fearful drivers, or expensive sensors, but what's known as the "trolley problem": the debate over who should die and who should be saved when an autonomous vehicle faces such a horrible choice on the road. And short of that, how will robotic vehicles navigate the countless other ethical decisions, small and large, that drivers make as a matter of course?
In a paper, researchers at Carnegie Mellon and MIT propose a model that uses artificial intelligence and crowdsourcing to automate ethical decisions in self-driving cars. "In an emergency, how do you prioritize?" study author Ariel Procaccia, a professor at Carnegie Mellon, tells Axios.
The bottom line: The CMU-MIT model is only a prototype at this stage. But it or something like it will have to be mastered if fully autonomous cars are to become a reality.
How they created the system: Procaccia's team used a model at MIT called the Moral Machine, in which 1.3 million people each cast ethical votes on around 13 difficult, either-or choices in trolley-like driving scenarios. In all, participants provided 18.2 million answers. The researchers used artificial intelligence to learn each voter's preferences and then aggregated them.
This created a "distribution of societal preferences" — in effect the rules of ethical behavior in a car. The researchers could now ask the system any driving question that came to mind, since it "knew" the ethical way to decide; it was as though they were asking the original 1.3 million participants to vote again.
A robot election: "When the system encounters a dilemma, it essentially holds an election, by deducing the votes of the 1.3 million voters, and applying a voting rule," Procaccia said.
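The election Procaccia describes can be sketched in miniature. This is a hypothetical illustration, not the paper's actual implementation: each voter is reduced to a learned weight vector over scenario features, each voter model "votes" for the alternative it scores highest, and a simple plurality rule picks the winner. The feature names, weights, and alternatives below are invented for the example.

```python
from collections import Counter

def voter_choice(weights, alternatives):
    """Return the index of the alternative this voter model prefers.

    Each alternative is a feature vector (e.g. pedestrians saved,
    passengers saved); the voter's utility is a weighted sum.
    """
    scores = [sum(w * f for w, f in zip(weights, alt)) for alt in alternatives]
    return max(range(len(alternatives)), key=lambda i: scores[i])

def hold_election(voter_models, alternatives):
    """Deduce every voter's choice and aggregate with a plurality rule."""
    ballots = Counter(voter_choice(w, alternatives) for w in voter_models)
    return ballots.most_common(1)[0][0]

# Two alternatives; features = (pedestrians saved, passengers saved).
alternatives = [(2, 0), (0, 1)]          # e.g. swerve vs. stay the course
# Three learned voter models (weight vectors) -- illustrative values only.
voters = [(1.0, 0.5), (0.8, 1.0), (1.2, 0.3)]
winner = hold_election(voters, alternatives)  # index of the winning alternative
```

In the real system the "electorate" is the 1.3 million Moral Machine participants, and the voting rule is chosen for its theoretical guarantees, but the shape of the computation is the same: infer each voter's answer, then count.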