The battle behind making machines more human-like

Leading artificial intelligence researchers are debating whether machines, once they achieve superhuman smarts, should be expected to explain themselves, or whether spitting out an exceptional, if mysterious, answer is good enough.

The debate is part of the raging global discussion over the perceived challenge machine intelligence poses to jobs, the human role in society, geopolitical power; in short, to us.

Battle lines: Peter Norvig, head of research at AI powerhouse Google, has attracted attention with a June 22 speech in Sydney arguing that future intelligent computers shouldn't be expected to explain why they think what they think, since, after all, humans themselves don't do a very good job of explaining the true reasons for their decisions. Others argue that neural networks are simply a different way of thinking, and that even if we ought to be able to plumb the machine mind, there is no way for us to get at it.