Carnegie Mellon University
Not only do we not always say what we mean, often we don't say anything at all. That can be a real problem if you're going to be around service robots or self-driving vehicles.
But at Carnegie Mellon, a team led by Yaser Sheikh, a professor of robotics, has classified gestures across the human body. Using a dome containing 500 video cameras, they recorded every movement, down to the possibly tell-tale wiggle of your fingers.
The objective: Sheikh's effort gets at two realities going forward:
- If we are going to be living and working in proximity with robots, they are going to have to start understanding our non-verbal communications.
- And self-driving cars need a heads-up about our intentions while we are standing near or walking down the street.
But Sheikh is not quite there yet: Hanbyul Joo, one of Sheikh's postdocs, tells Axios that while the gestures are all catalogued, other experts now need to step in and define what they mean, a big challenge since "even humans can't define their motions."