Nov 16, 2017 - Technology

AI searches for new inspiration

Illustration: Lazaro Gamio / Axios

Deep learning — the AI technique that allowed a computer to beat a world-champion Go player — has become very good at recognizing patterns in images and games. But it's loosely based on ideas we've had about the human brain for decades. Researchers now have more insights from neuroscience and better technologies, both of which they are trying to use to make more intelligent machines.

What's new: On Tuesday, DeepMind co-founder Demis Hassabis presented new work from the company that signals a move into different territory. Researchers gave an AI system pictures of a 3D scene, along with the camera coordinates for each picture, and it was able to render the scene from an angle it had never seen. Being able to build models of the world like this, and then use them to react and respond to situations never encountered before, is considered key to intelligence.

The unpublished work was presented at the Society for Neuroscience's annual meeting in Washington, D.C. It's one example of different kinds of learning that researchers would like to develop in AI — and one based on aspects of human intelligence that computers haven't mastered yet.

The approach is one of several being tried, but it is one some researchers are excited about because, as Hassabis recently wrote, "[The human brain is] the only existing proof that such an intelligence is even possible."

"A lot of the machine learning people now are turning back to neuroscience and asking what have we learned about the brain over the last few decades, and how we can translate principles of neuroscience in the brain to make better algorithms," says Saket Navlakha, a computer scientist at the Salk Institute for Biological Sciences.

Last week, he and his colleagues published a paper suggesting that a strategy fruit flies use to decide whether to avoid an odor they haven't encountered before can also improve a computer's search for similar images.
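In the fly's olfactory circuit, each odor is projected through sparse random connections into a much larger pool of neurons, and only the few most active neurons keep firing, so similar odors end up with similar "tags." Below is a minimal sketch of that idea as a locality-sensitive hash, written in Python with NumPy; the function name, parameter values and toy data are illustrative assumptions, not taken from the paper.

    import numpy as np

    def fly_hash(X, expansion=40, top_k=16, density=0.1, seed=0):
        """Fly-inspired locality-sensitive hashing: a sparse random
        projection to a higher dimension, then winner-take-all."""
        rng = np.random.default_rng(seed)
        n, d = X.shape
        # Each output unit samples a small random subset of input
        # dimensions (a sparse binary projection matrix).
        proj = (rng.random((d, expansion * d)) < density).astype(float)
        # Center each input, loosely mimicking the fly's normalization
        # of odor concentration.
        Y = (X - X.mean(axis=1, keepdims=True)) @ proj
        # Winner-take-all: keep only the top_k most active units per
        # input as a binary tag; similar inputs share tag bits.
        tags = np.zeros(Y.shape, dtype=bool)
        winners = np.argpartition(Y, -top_k, axis=1)[:, -top_k:]
        np.put_along_axis(tags, winners, True, axis=1)
        return tags

    # Toy similarity search: rank items by tag overlap with item 0.
    X = np.random.default_rng(1).random((100, 50))
    tags = fly_hash(X)
    overlap = (tags & tags[0]).sum(axis=1)
    nearest = np.argsort(-overlap)[1:6]  # the five most similar items

Because the tags are sparse and binary, comparing two of them is cheap, which is what makes the scheme attractive for nearest-neighbor search.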

Other goals:

  • One-shot learning: Children can learn a new word, task or concept from just a few examples. Early deep learning algorithms required massive amounts of data to do the same. Progress has been made in reducing the amount of data needed, but it is still far more than what a two-year-old needs.
  • Attention: In a crowded place, most of us can pay attention to what we need to know and filter out the rest. "Trying to include this idea in neural networks and machine learning is something people are paying more attention to," says Navlakha. (A minimal sketch of attention used to read from an external memory appears after this list.)
  • External memory: Brains have multiple systems for memory that operate at different time scales. Researchers want to see if they can give algorithms the equivalent of working memory or scratch pads. DeepMind combined external memory with deep learning to create an algorithm that can efficiently navigate the London Underground.
  • Intuitive physics: We recognize when something is physically off; an airplane balancing on its wing on a highway is clearly not right to us. But when a computer captions just that image, it reads "an airplane is parked on a tarmac at the airport." NYU's Brenden Lake says, "We don't know how the brain has those abilities."
  • Lifelong learning: Humans constantly integrate new and sometimes conflicting information, reconcile it, and at times revise their entire understanding of something. "This constant change over time is something machine learning and AI has been struggling with," says Navlakha.
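To make the attention and external memory ideas concrete, here is a toy sketch of a content-based soft read over a memory matrix, the kind of addressing used in DeepMind's differentiable neural computer; the sizes, names and data below are illustrative assumptions, not the published architecture.

    import numpy as np

    def softmax(z):
        z = z - z.max()            # stabilize before exponentiating
        e = np.exp(z)
        return e / e.sum()

    def soft_read(query, memory):
        """Score each memory slot against the query, turn the scores
        into attention weights, and return the weighted blend: slots
        that match the query dominate, the rest are filtered out."""
        scores = memory @ query    # one similarity score per slot
        weights = softmax(scores)  # weights are positive and sum to 1
        return weights @ memory

    rng = np.random.default_rng(0)
    memory = rng.standard_normal((8, 4))              # 8 slots, 4 features
    query = memory[3] + 0.1 * rng.standard_normal(4)  # noisy cue for slot 3
    read = soft_read(query, memory)                   # close to memory[3]

Because every step is differentiable, the attention weights can be trained by gradient descent, which is what lets a network learn what to store and when to look it up.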

The big question for all AI approaches: What problem is a particular algorithm best suited to solve, and will it be better than other AI techniques? For neuroscience-inspired AI, there has been early progress but "the jury is still out," says Oren Etzioni, who heads the Allen Institute for Artificial Intelligence.

The big picture: It isn't about replicating the brain in a computer, but building a mathematical theory of learning, says Terrence Sejnowski, who is also at the Salk Institute. "Eventually we will get to a point where theory in the machine learning world will illuminate neuroscience in a way unlike we've seen so far."

The back story: Deep learning algorithms only started to work in recent years as more data became available to train them and more processing power could be dedicated to them. In that sense, Sejnowski and others say what we've seen so far is really an "engineering achievement."

The field's pioneer, Geoffrey Hinton, recently said it needs new ideas.

The recent advances have reignited a debate among AI researchers about how best to build more intelligent machines. One way is to find principles of how the brain works and translate them into machine learning and other applications.

There's the "build it like the brain" approach — and to that end, efforts to map how neurons communicate with one another. And then there is the strategy of hard-wiring rules gleaned from models of how humans learn. MIT's Joshua Tenenbaum, Lake and their colleagues suggest the latter is needed to get beyond the accomplishments of pattern recognition. It's very likely advances will come from combining both.

"A more productive way to think about it is that there are some core things that infants, children, and adults use to learn new concepts and perform new tasks," says Lake. He suggests these principles of development and cognition should be seen as milestones and targets for machine learning algorithms to capture, however they get there.
