Another challenge for self-driving cars: Learning human body language
Researchers at the University of Michigan are studying human body language to teach self-driving cars to recognize and predict pedestrian movements with greater precision than current technologies.
Why it matters: People don't always pay attention when crossing the street, so AVs need to be on the lookout for distracted pedestrians, not just other cars on the road.
"If a pedestrian is playing with their phone, you know they're distracted. Their pose and where they're looking is telling you a lot about their level of attentiveness. It's also telling you a lot about what they're capable of doing next."— Ram Vasudevan, assistant professor of mechanical engineering, Michigan
How it works: Using data collected by vehicles through cameras, lidar and GPS, the researchers captured video snippets of humans in motion and then recreated them in a 3D computer simulation.
- This enabled them to create a "biomechanically inspired recurrent neural network" that catalogs human movements.
- By focusing on gait, body symmetry and foot placement, it can predict what pedestrians might do next and train self-driving cars to recognize that behavior.
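To make the idea concrete, here is a minimal sketch of how a recurrent network might consume a sequence of pose features and read out a predicted displacement. The feature names (gait phase, body symmetry, foot positions) and the untrained random weights are illustrative assumptions, not the Michigan team's actual model.

```python
import numpy as np

# Hypothetical pose features per video frame (illustrative only):
# [gait_phase, body_symmetry, left_foot_x, left_foot_y, right_foot_x, right_foot_y]
N_FEATURES = 6
HIDDEN = 16

rng = np.random.default_rng(42)
# Randomly initialized, untrained weights -- a real system would learn these.
W_in = rng.normal(scale=0.1, size=(HIDDEN, N_FEATURES))
W_rec = rng.normal(scale=0.1, size=(HIDDEN, HIDDEN))
W_out = rng.normal(scale=0.1, size=(2, HIDDEN))  # readout: (dx, dy) in meters

def predict_displacement(pose_sequence):
    """Run a simple recurrent cell over a sequence of per-frame pose
    feature vectors and read out a predicted 2D displacement."""
    h = np.zeros(HIDDEN)
    for frame in pose_sequence:
        # Recurrent state update: mix the current frame with the running summary.
        h = np.tanh(W_in @ frame + W_rec @ h)
    return W_out @ h

# Example: 30 frames (roughly one second of 30 fps video) of pose features.
poses = rng.normal(size=(30, N_FEATURES))
dx, dy = predict_displacement(poses)
```

The point of the recurrence is that the prediction depends on the whole motion history (how the gait is unfolding), not on a single still frame.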
Background: Until now, most machine learning for AVs has relied on still images.
- If you show a computer enough photos of a stop sign, it will eventually learn to recognize stop signs in the real world.
What's next: By using video clips that run for several seconds, Michigan's system can study the first half of the snippet to make its predictions, then verify their accuracy against the second half.
- The researchers said they could predict a pedestrian's location to within 10 centimeters one second out and to within 80 centimeters six seconds out. The methods they compared against were off by as much as 7 meters.
- "We're [now] better at figuring out where a person is going to be," says Matthew Johnson-Roberson, associate professor in Michigan's naval architecture and marine engineering department.