Illustration: Aïda Amer/Axios

Two parallel quests to understand learning — in machines and in our own heads — are converging in a small group of scientists who think that artificial intelligence may hold an answer to the deep-rooted mystery of how our brains learn.

Why it matters: If machines and animals do learn in similar ways — still an open question among researchers — figuring out how could simultaneously help neuroscientists unravel the mechanics of knowledge or addiction, and help computer scientists build much more capable AI.

The big picture: For decades, researchers compared human and machine learning and largely rejected the notion that they are closely linked.

  • At the center of the question is the credit-assignment problem: the enigma of how the brain knows which parts of itself need to change in order to better accomplish a task.
  • In AI, a major method for credit assignment is known as error backpropagation.
  • After backprop fueled major advances in AI image recognition, some scientists started revisiting whether the brain could be doing something similar.

"There is a big undercurrent in neuroscience [saying] we should go back to neural networks," says Konrad Kording, a neuroscientist at UPenn, referring to a reigning AI technique that relies on backprop.

  • Backprop allows machines to learn from their mistakes. If an actual outcome differs from the computer's predicted outcome, information about what went wrong gets passed back through layers in the neural network, adjusting the system accordingly.
  • Noticing errors and spreading information about them are central to the brain, too.
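The mechanism described above can be made concrete in a few lines of code. This is a minimal, illustrative sketch of error backpropagation for a tiny two-layer network with sigmoid units, trained on a single made-up example — the network shape, learning rate, and data are all assumptions for the sake of demonstration, not anything from the experiments in this story:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One training example: input x, desired ("actual") outcome y.
x = np.array([0.5, -1.2, 0.8])
y = np.array([1.0])

# Random weights for two layers of the network.
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(1, 4))

def forward(x):
    h = sigmoid(W1 @ x)    # hidden-layer activity
    out = sigmoid(W2 @ h)  # the network's predicted outcome
    return h, out

lr = 0.5
for step in range(500):
    h, out = forward(x)
    # Error at the output: predicted outcome vs. actual outcome.
    delta2 = (out - y) * out * (1 - out)
    # Backpropagate: pass information about what went wrong
    # back through the hidden layer.
    delta1 = (W2.T @ delta2) * h * (1 - h)
    # Adjust the system accordingly (gradient descent).
    W2 -= lr * np.outer(delta2, h)
    W1 -= lr * np.outer(delta1, x)

_, out = forward(x)
print(float(out[0]))  # prediction moves toward the target
```

Each update assigns "credit" (or blame) to individual weights in proportion to how much they contributed to the error — which is exactly the bookkeeping that, as the researchers above note, the brain must somehow be doing too.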

What's happening: In a flurry of recent papers, researchers propose tweaking or approximating backprop to explain how the brain learns from mistakes.

  • One central debate is over whether neurons, which communicate through chemical signals, can transmit information to another neuron while simultaneously receiving feedback from that same neuron about what went wrong.
  • The researchers chasing this line of inquiry say there are biologically plausible ways neurons could do this to solve the credit-assignment problem.
  • But so far, Kording cautions, "the experimental evidence for backprop is thin."

A trio of scientists in Toronto and a DeepMind researcher are searching for that evidence in the brains of mice. In their experiment, carried out at the Allen Institute for Brain Science in Seattle, animals watch patterns on a screen as their brain activity is recorded.

  • The animals see a consistent pattern of moving shapes for hours — then, an aberration, like a square going the wrong way.
  • Preliminary results suggest there is in fact a specific, measurable signal that passes between neurons only when the animals witness an "error."
  • "We know the brain has to have some mechanism of credit assignment," says Joel Zylberberg, a professor at York University in Toronto. "The most promising candidate still seems to be these top-down feedback signals."

But, but, but: The brain doesn't just learn from error. Some of our knowledge is based on intuition and some is acquired throughout our lives.

  • "The general thing that I think is being missed by the field is that there's a huge disconnect between the diversity and complexity of the brain and the relative simplicity of the models people are using," says Gary Marcus, an NYU psychology professor and vocal critic of AI's dependence on deep learning.
  • "It remains to be seen whether people are shoehorning the technology they know right now" into their understanding of how the brain works, he says.

The bottom line: Researchers know learning hinges on the strengthening and weakening of the synapses between individual neurons. But how that change plays out globally among the roughly 100 trillion synapses in the human brain — so we can recognize someone's face, for example — is unknown.

  • "We're looking at the action of lots of synapses and that is the problem that backprop solves [in machines]," says neuroscientist Aaron Batista from the University of Pittsburgh.
  • "I want to be inspired by neural networks but I don’t want to take them too literally as if the only way to do this is to have the literal algorithm for backprop implemented in the brain."


Trump refuses to answer question on whether he supports QAnon conspiracy theory

President Trump on Friday refused to answer a direct question on whether or not he supports the QAnon conspiracy theory during a press briefing.

Why it matters: Trump congratulated Georgia Republican Marjorie Taylor Greene, who vocally supports the conspiracy theory, on her victory in a House primary runoff earlier this week — illustrating how the once-fringe conspiracy theory has gained ground within his party.

Postal workers' union endorses Biden

Photo: Drew Angerer/Getty Images

The National Association of Letter Carriers, the union representing roughly 300,000 current and former postal workers, on Friday endorsed Joe Biden in the 2020 presidential election, calling him "a fierce ally and defender of the U.S. Postal Service," reports NBC News.

Why it matters: The endorsement comes as President Trump has vowed to block additional funding for the USPS in the next coronavirus stimulus package, linking it to his continued baseless claims that increased mail-in voting will lead to widespread voter fraud.

Lawmakers demand answers from World Bank on Xinjiang loan

Illustration: Sarah Grillo/Axios

U.S. lawmakers are demanding answers from the World Bank about its continued operation of a $50 million loan program in Xinjiang, following Axios reporting on the loans.

Why it matters: The Chinese government is currently waging a campaign of cultural and demographic genocide against ethnic minorities in Xinjiang, in northwest China. The lawmakers contend that the recipients of the loans may be complicit in that repression.