
Leading artificial intelligence researchers are debating whether machines — once they achieve super-human smarts — should be expected to explain themselves in robust discussion, or if spitting out an exceptional, if mysterious, answer is good enough.
The debate is part of the raging global discussion over a perceived challenge posed by machine intelligence to jobs, the human role in society, geopolitical power — to us.
Battle lines: Peter Norvig, head of research at AI powerhouse Google, attracted attention with a June 22 speech in Sydney arguing that future intelligent computers shouldn't be expected to explain why they think what they think, since, after all, humans themselves don't do a very good job of explaining the true reasons for their decisions. Others argue that neural networks are simply a different way of thinking, and that even if we wanted to plumb the machine mind, there is no way for us to get at it.
But David Ferrucci, who ran IBM's Watson project for the 2011 Jeopardy challenge and now runs an AI company called Elemental Cognition, asserts that machines have to be able to defend their answers. "I need an explanation. I want to be able to critique it," he tells Axios.
Why it matters: Machines that don't explain themselves scare people who worry about godlike computer overlords. But researchers like Ferrucci are posing a more fundamental question: Are we ourselves willing to be held accountable for decisions we base on a computer's conclusions? Unless he knows the computer's rationale, Ferrucci doesn't want to gamble. And others agree: DARPA, the Pentagon's advanced research agency, has launched a program it calls "Explainable Artificial Intelligence," giving it the acronym XAI.
Let's talk about it: The debate played out on stage at the O'Reilly Artificial Intelligence Conference this week in New York. Ferrucci argued that the best way to get to the kind of AI he is thinking of — the kind you can have a normal exchange of opinions with — is to study and develop a computer that thinks like a human. "[But] there is a level to those dialogues that we are nowhere close to having," he said.
The future: Josh Tenenbaum, a professor at MIT, said the answer is to study young children and develop machine intelligence that learns and thinks like them. He played a video of an 18-month-old child who, confronted with an adult having trouble putting books into a closed cabinet, spontaneously opens the cabinet's doors and watches to make sure the man succeeds. "That's the heart of intelligence right there. And I think that's the grand challenge for AI," he said. "...If we could reverse engineer what's going on in that kid's mind, just think what we can do with robots and other machines that could really help us out."
The Amazon effect: Commercially, many of us already use Alexa or similar technology at home, but Kris Hammond, chief scientist at Narrative Science, said, "Alexa understands words. It doesn't understand language." Alexa can hold only one idea at a time; if you ask a follow-up, it does not remember what was just said. "You need to be able to challenge," Hammond said. "Otherwise, we would only be listening."