Oct 28, 2021 - Science

AI hints at how the brain processes language

Illustration: Sarah Grillo/Axios

Predicting the next word someone might say — like AI algorithms now do when you search the internet or text a friend — may be a key part of the human brain's ability to process language, new research suggests.
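
For a concrete sense of what next-word prediction looks like in practice, here is a minimal sketch that asks the publicly released GPT-2 model (via the Hugging Face transformers library) for its most likely continuations of a prompt. The prompt and tooling are illustrative choices, not the study's setup.

```python
# Next-word prediction with GPT-2, via the Hugging Face transformers library.
# Illustrative only: the study compared 43 different language models.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The brain makes sense of language by"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: [1, sequence_length, vocab_size]

# Probability distribution over the next token, given everything seen so far.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob:.3f}")
```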

Why it matters: How the brain makes sense of language is a long-standing question in neuroscience. The new study demonstrates how AI algorithms that aren't designed to mimic the brain can help to understand it.

  • "No one has been able to make the full pipeline from word input to neural mechanism to behavioral output," says Martin Schrimpf, a Ph.D. student at MIT and an author of the new paper published this week in PNAS.

What they did: The researchers compared 43 machine-learning language models, including OpenAI's GPT-2, which is optimized to predict the next word in a text, to data from brain scans showing how neurons respond when someone reads or hears language.

  • They gave each model words and measured the response of nodes in the artificial neural networks that, like the brain's neurons, transmit information.
  • Those responses were then compared to the activity of neurons — measured with functional magnetic resonance imaging (fMRI) or electrocorticography — when people performed different language tasks (a rough version of that comparison is sketched below).
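
In broad strokes, that comparison amounts to mapping a model's internal activations onto recorded brain responses and asking how well held-out responses can be predicted. The sketch below illustrates the idea with GPT-2 activations and a regularized linear regression; the data files, layer choice, and pooling scheme are assumptions for illustration, not the paper's exact pipeline.

```python
# Rough sketch of the comparison: map GPT-2's internal activations onto measured
# brain responses with a regularized linear regression, then score how well
# held-out responses are predicted. The data files, layer choice, and pooling
# are assumptions for illustration, not the paper's exact pipeline.
import numpy as np
import torch
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold
from transformers import GPT2Model, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")
model.eval()

def sentence_features(sentences, layer=9):
    """Mean-pool one layer's hidden states to get one activation vector per sentence."""
    feats = []
    for s in sentences:
        ids = tokenizer(s, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**ids, output_hidden_states=True).hidden_states[layer]
        feats.append(hidden[0].mean(dim=0).numpy())
    return np.stack(feats)

# Hypothetical inputs: the sentences shown to participants and their recorded responses.
sentences = open("stimulus_sentences.txt").read().splitlines()
X = sentence_features(sentences)   # model activations: sentences x features
Y = np.load("fmri_responses.npy")  # brain data: sentences x voxels (or electrodes)

scores = []
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    reg = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(X[train], Y[train])
    pred = reg.predict(X[test])
    # Correlate predicted and observed activity for each voxel, then average.
    r = [np.corrcoef(pred[:, v], Y[test, v])[0, 1] for v in range(Y.shape[1])]
    scores.append(np.nanmean(r))

print("cross-validated brain predictivity:", np.mean(scores))
```

A regularized regression is a natural fit here because the activation vectors typically have far more dimensions than there are stimulus sentences, and the cross-validated correlation gives a single score for how well a model's activity lines up with the brain's.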

What they found: The activity of nodes in the AI models that are best at next-word prediction was similar to the activity patterns of neurons in the human brain.

  • These models were also better at predicting how long it took someone to read a text — a behavioral response; one way to run that comparison is sketched after this list.
  • Models that excelled at other language tasks — like filling in a blank word in a sentence — didn't predict the brain responses as well.
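
One common way to test the reading-time claim is to compute each word's surprisal (how unpredictable the model finds it given the preceding words) and correlate that with how long readers spend on each word. The sketch below assumes a hypothetical self-paced reading dataset; it is not the paper's analysis.

```python
# Sketch of the behavioral comparison: words the model finds harder to predict
# (higher surprisal) should take longer to read. The reading-time file is a
# hypothetical dataset, and real analyses must first align GPT-2's subword
# tokens to whole words; this sketch assumes one reading time per token.
import numpy as np
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_surprisals(text):
    """Negative log probability of each token given the tokens before it."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = ids[0, 1:]
    return (-log_probs[torch.arange(len(targets)), targets]).numpy()

text = open("stimulus_passage.txt").read()    # hypothetical passage
surprisal = token_surprisals(text)
reading_times = np.load("reading_times.npy")  # hypothetical: one time per token after the first

# A positive correlation means harder-to-predict words take longer to read.
print(np.corrcoef(surprisal, reading_times)[0, 1])
```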

Yes, but: It's not direct evidence of how the brain processes language, says Evelina Fedorenko, a professor of cognitive neuroscience at MIT and an author of the study. "But it is a very suggestive source of evidence and much more powerful than anything we’ve had."

  • The finding may not be enough to explain how humans extract meaning from language, Stanford psychologist Noah Goodman told Scientific American, though he agreed with Fedorenko that the method is a big advance for the field.

The intrigue: There was one robust but puzzling finding, Schrimpf says.

  • AI models can be trained on massive amounts of text or they can be untrained.
  • Schrimpf says he expected the untrained models to give poor predictions of the brain responses, but the team found they do a decent job of it.
  • It could be there is an inherent structure that pushes untrained models in the right direction, he says. Humans are similar — our untrained brains are "a good start state from which you can get something without optimizing to real world experiences."

The bottom line: A common criticism of research comparing AI and neuroscience is that both are black boxes, Fedorenko says. "This is outdated. There are new tools for probing."
