A sigh, a nod, a robot
Not only do we not always say what we mean; often we don't say anything at all. That can be a real problem if you're planning to spend time around service robots or self-driving vehicles.
But at Carnegie Mellon, a team led by Yaser Sheikh, a professor of robotics, has classified gestures across the human body. Using a dome containing 500 video cameras, they captured every movement, down to the possibly tell-tale wiggle of your fingers.