
Carnegie Mellon University
Not only do we not always say what we mean; often we don't say anything at all. That can be a real problem if you plan to spend time around service robots or self-driving vehicles.
But at Carnegie Mellon, a team led by Yaser Sheikh, a professor of robotics, has classified gestures across the human body. Using a dome containing 500 video cameras, they captured every movement, down to the possibly tell-tale wiggle of your fingers.
The objective: Sheikh's effort gets at a couple of realities going forward:
- If we are going to live and work in close proximity to robots, they will have to start understanding our non-verbal communication.
- And self-driving cars need a heads-up on our intentions while we are standing on or walking down the street.
But Sheikh is not quite there yet: Hanbyul Joo, one of Sheikh's post-docs, tells Axios that while the gestures are all catalogued, other experts now need to step in and define what they mean, which is a big challenge since "even humans can't define their motions."