May 17, 2019 - Technology

Self-driving cars need to be better mind readers

Illustration: a Rorschach test in the shape of a car. Credit: Sarah Grillo/Axios

Self-driving cars can be programmed to stay in their lane and obey speed limits. What they lack is human intuition — the ability to know what's going on inside someone else's head.

Why it matters: Autonomous vehicles are getting closer to reality, but if they're ever going to drive better than people, they need to learn how to share the road with other cars, pedestrians and cyclists. Turning them into social creatures with human instincts is arguably the biggest roadblock to autonomy.

Human drivers react to social cues all the time, and not just the obvious ones like a wave of the hand or a nod from a pedestrian signaling a driver to go ahead. People give off hundreds of signals that others use to understand their state of mind and respond accordingly.

  • A person standing in a crosswalk looking at their phone is probably distracted, for example, so the driver knows to yield.
  • A group of kids playing around on the sidewalk could tumble into the street, so caution is advised.
  • A motorcyclist at an intersection puts their feet on the ground, a sign they've stopped and aren't about to pull out, so it's probably safe to proceed.

Be smart: AVs need the same instincts so they can recognize, understand, and predict human behavior. But that ability, which comes so naturally to humans, is hard for machines to learn.

Driving the news: Perceptive Automata, a startup whose backers include Toyota and Hyundai, has a unique approach — it's applying behavioral science techniques to machine learning.

  • Instead of training AI models by labeling objects (this is a tree; this is not a tree, for example), Perceptive Automata characterizes the way people understand others' state of mind and then trains its algorithms to recognize and predict human behavior.

How it works:

  • Researchers collect sensor data from vehicles interacting with pedestrians, bicyclists, and other motorists and then chop up the images into smaller slices.
  • Then they blur or cover a portion of each slice and ask groups of people what the depicted person is about to do (a rough sketch of that masking step follows this list).
  • They repeat the process hundreds of thousands of times, with a variety of interactions, and use the results to train models that interpret the world the way people do.
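Perceptive Automata hasn't published its tooling, so the sketch below is only an illustration of what the chop-and-mask step could look like, written in Python with Pillow. The file name, crop box and blurred region are made-up placeholders, not the company's actual parameters.

```python
# Hypothetical sketch of preparing one annotation task: crop a slice out of a
# frame, then blur part of it before showing it to human annotators.
# All paths and coordinates below are placeholders.
from PIL import Image, ImageDraw, ImageFilter

def mask_slice(frame_path, crop_box, blur_box, blur_radius=12):
    """Crop a slice from a frame and blur one region of the slice."""
    frame = Image.open(frame_path).convert("RGB")
    slice_img = frame.crop(crop_box)                    # e.g. a pedestrian plus nearby context
    blurred = slice_img.filter(ImageFilter.GaussianBlur(blur_radius))
    mask = Image.new("L", slice_img.size, 0)
    ImageDraw.Draw(mask).rectangle(blur_box, fill=255)  # region hidden from annotators
    return Image.composite(blurred, slice_img, mask)    # blurred where the mask is white

# Example: hide the upper part of the crop, then save it as an annotation task.
# masked = mask_slice("frame_0042.png", crop_box=(100, 50, 400, 450), blur_box=(0, 0, 300, 150))
# masked.save("annotation_task_0042.png")
```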

Humans don't always agree, of course; predicting human behavior is not as easy as labeling a tree. But capturing that ambiguity is important for programming how an AV should behave, CEO Sid Misra says.
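The company hasn't published its models, but one common way to keep that disagreement in the training signal is to train against the full spread of annotator answers ("soft" labels) instead of a single majority vote. The sketch below is a minimal, hypothetical PyTorch version; the intent categories, network and data are placeholders.

```python
# Minimal, hypothetical sketch: train a classifier against the distribution of
# human answers rather than a single hard label. Network, categories and data
# are placeholders, not Perceptive Automata's actual models.
import torch
import torch.nn as nn
import torch.nn.functional as F

class IntentClassifier(nn.Module):
    """Tiny CNN mapping an image slice to a distribution over pedestrian intent."""
    def __init__(self, num_intents=3):  # e.g. "about to cross", "staying put", "unclear"
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_intents)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def soft_label_loss(logits, human_votes):
    """Cross-entropy against the full split of annotator answers.

    human_votes rows sum to 1, e.g. [0.6, 0.3, 0.1] if 60% of annotators said
    the person is about to cross. Keeping the split lets the model learn to
    express the same kind of uncertainty instead of a forced yes/no.
    """
    log_probs = F.log_softmax(logits, dim=-1)
    return -(human_votes * log_probs).sum(dim=-1).mean()

# Toy usage with random tensors standing in for image slices and annotator votes.
model = IntentClassifier()
images = torch.randn(8, 3, 64, 64)                # 8 cropped image slices
votes = torch.softmax(torch.randn(8, 3), dim=-1)  # per-slice annotator distributions
loss = soft_label_loss(model(images), votes)
loss.backward()
```

The point of the design is the loss function: because the target is a probability split rather than a one-hot label, a slice that genuinely divides human opinion pushes the model toward a hedged prediction, which is in line with how the article describes the company's goal.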

Yes, but: That also means self-driving cars could be prone to mistakes, notes Elizabeth Walshe, a cognitive neuroscientist who studies driving behavior at the Children's Hospital of Philadelphia and the University of Pennsylvania.

  • "Humans are not perfect. Are we happy with machines that make mistakes and are not perfect?"

The bottom line: It will take time for people to learn to trust self-driving cars, but teaching the cars to understand what humans might be thinking is a good place to start.

Go deeper: Cars of the future need to be able to heal themselves
