Aug 8, 2019 - Technology

Looking to AI to understand how we learn

Illustration: Aïda Amer/Axios (a robot arm holding a brain)

Two parallel quests to understand learning — in machines and in our own heads — are converging in a small group of scientists who think that artificial intelligence may hold an answer to the deep-rooted mystery of how our brains learn.

Why it matters: If machines and animals do learn in similar ways — still an open question among researchers — figuring out how could simultaneously help neuroscientists unravel the mechanics of knowledge or addiction, and help computer scientists build much more capable AI.

The big picture: For decades, researchers compared human and machine learning and largely rejected the notion that they are closely linked.

  • At the center of the question is the credit-assignment problem: the enigma of how the brain knows which parts of itself need to change in order to better accomplish a task.
  • In AI, a major method for credit assignment is known as error backpropagation.
  • After backprop fueled major advances in AI image recognition, some scientists started revisiting whether the brain could be doing something similar.

"There is a big undercurrent in neuroscience [saying] we should go back to neural networks," says Konrad Kording, a neuroscientist at UPenn, referring to a reigning AI technique that relies on backprop.

  • Backprop allows machines to learn from their mistakes. If an actual outcome differs from the computer's predicted outcome, information about what went wrong gets passed back through the layers of the neural network, adjusting the system accordingly; a toy version is sketched after this list.
  • Noticing errors and spreading information about them are central to the brain, too.
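
To make that concrete, here is a minimal sketch of backprop on a toy problem. The network size, the XOR task, and every name here are illustrative, not drawn from any of the research described in this story:

    # Minimal backprop sketch (illustrative only). A tiny 2 -> 8 -> 1 network
    # learns XOR: the forward pass makes a prediction, the backward pass carries
    # the error through the layers and nudges each weight by its share of blame.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
    y = np.array([[0], [1], [1], [0]], dtype=float)              # targets

    W1 = rng.normal(size=(2, 8))  # input -> hidden weights
    W2 = rng.normal(size=(8, 1))  # hidden -> output weights

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 1.0
    for step in range(10_000):
        # Forward pass: the network's prediction
        h = sigmoid(X @ W1)
        y_hat = sigmoid(h @ W2)

        # Output error: predicted vs. actual outcome (gradient of squared error)
        err_out = (y_hat - y) * y_hat * (1 - y_hat)

        # Backward pass: send the error back through W2 so each hidden
        # unit gets credit (or blame) for its contribution
        err_hidden = (err_out @ W2.T) * h * (1 - h)

        # Adjust weights in proportion to their share of the error
        W2 -= lr * h.T @ err_out
        W1 -= lr * X.T @ err_hidden

    print(y_hat.round(2))  # typically close to [0, 1, 1, 0]

Note the err_hidden line: the error travels backward through the very same weights (W2) used on the forward pass. That "weight transport" step is the part that is hardest to square with real neurons, which is where the debate below picks up.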

What's happening: In a flurry of recent papers, researchers propose tweaking or approximating backprop to explain how the brain learns from mistakes.

  • One central debate is over whether neurons, which communicate through chemical signals, can simultaneously transmit information to another neuron while receiving feedback from that same neuron about what went wrong.
  • The researchers chasing this line of inquiry say there are biologically plausible ways neurons could do this to solve the credit-assignment problem (one such proposal is sketched after this list).
  • But so far, Kording cautions, "the experimental evidence for backprop is thin."
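
One widely discussed approximation of this kind is "feedback alignment." The sketch below is ours, not any lab's model: it is identical to the backprop toy above except that the backward pass uses a fixed random matrix B instead of the transpose of the forward weights, so no synapse has to carry signals in both directions:

    # Feedback alignment sketch (illustrative only). Same toy setup as the
    # backprop example, but the error signal travels over a fixed random
    # feedback matrix B rather than W2's transpose.
    import numpy as np

    rng = np.random.default_rng(1)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(size=(2, 8))
    W2 = rng.normal(size=(8, 1))
    B = rng.normal(size=(8, 1))  # fixed random feedback weights, never trained

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 1.0
    for step in range(10_000):
        h = sigmoid(X @ W1)
        y_hat = sigmoid(h @ W2)
        err_out = (y_hat - y) * y_hat * (1 - y_hat)
        # The only change from backprop: feedback travels over B, not W2.T
        err_hidden = (err_out @ B.T) * h * (1 - h)
        W2 -= lr * h.T @ err_out
        W1 -= lr * X.T @ err_hidden

On toy problems like this one, the forward weights tend to drift into alignment with the fixed feedback, so learning still works; whether anything like it happens in real brains is exactly what experiments such as the mouse study below are probing.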

A trio of scientists in Toronto and a DeepMind researcher are searching for that evidence in the brains of mice. In their experiment, carried out at the Allen Institute for Brain Science in Seattle, animals watch patterns on a screen as their brain activity is recorded.

  • The animals see a consistent pattern of moving shapes for hours — then, an aberration, like a square going the wrong way.
  • Preliminary results suggest there is in fact a specific, measurable signal that passes between neurons only when the animals witness an "error."
  • "We know the brain has to have some mechanism of credit assignment," says Joel Zylberberg, a professor at York University in Toronto. "The most promising candidate still seems to be these top-down feedback signals."

But, but, but: The brain doesn't learn only from error. Some of our knowledge is intuitive, and much of what we know is picked up gradually over a lifetime, not through explicit error correction.

  • "The general thing that I think is being missed by the field is that there's a huge disconnect between the diversity and complexity of the brain and the relative simplicity of the models people are using," says Gary Marcus, an NYU psychology professor and vocal critic of AI's dependence on deep learning.
  • "It remains to be seen whether people are shoehorning the technology they know right now" into their understanding of how the brain works, he says.

The bottom line: Researchers know learning hinges on the strengthening and weakening of the synapses between individual neurons. But how that change plays out globally among the roughly 100 trillion synapses in the human brain — so we can recognize someone's face, for example — is unknown.

  • "We're looking at the action of lots of synapses and that is the problem that backprop solves [in machines]," says neuroscientist Aaron Batista from the University of Pittsburgh.
  • "I want to be inspired by neural networks but I don’t want to take them too literally as if the only way to do this is to have the literal algorithm for backprop implemented in the brain."
