Updated Oct 14, 2018 - Technology

Will AI make us dumb?

A pixelated illustration of a brain blurs and vibrates in place.

Illustration: Rebecca Zisser/Axios

Since the Enlightenment, humans have made unprecedented advances — in science, technology, health conditions, living standards and more — as reason and analysis replaced superstition. Now, technology may be threatening that system.

Driving the news: In a much-read essay in The Atlantic, former Secretary of State Henry Kissinger argues that powerful artificial intelligence could replace human thought with data-driven decision-making. If that happens, AI could chip away at our ability to think critically.

AI, by mastering certain competencies more rapidly and definitively than humans, could over time diminish human competence and the human condition itself as it turns it into data.
— Henry Kissinger, in his June Atlantic essay

The context: Kissinger is referring to artificial general intelligence, a future form of AI that would be capable of human-like thought in a variety of fields. That’s very different from today's AI: algorithms that perform narrow tasks like identifying images and operating self-driving cars.

The big picture: I spoke with a half-dozen people from different fields about the essay. Some found it hard to fathom given how far AI is from general intelligence. Others, however, agreed with Kissinger's central thesis. Here is a sampling.

  • "I worry that human abilities may atrophy," says Daniel Weld, a professor at the University of Washington who studies human-computer interaction.
  • "My gut instinct is that we’ll get dumber in some ways — on the principle of muscle atrophy — even as we process vastly more information," says Darrin McMahon, a history professor at Dartmouth College who has written books about the Enlightenment.
  • McMahon told me to read the final paragraph of Michel Foucault’s 1966 book, "The Order of Things." In it, the philosopher imagines an ebbing of the ideas that defined humanity for centuries:
If some event of which we can at the moment do no more than sense the possibility — without knowing either what its form will be or what it promises — were to cause them to crumble … then one can certainly wager that man would be erased, like a face drawn in sand at the edge of the sea.

For now, AI is nowhere near rendering human brains mush.

  • Today’s AI can’t yet approximate the abilities of even a 2-year-old child.
  • Andrew Ng, who founded Google Brain and launched Baidu's AI lab, tells Axios that AI at this stage is simply a tool, like Google or a calculator.

Whether or not AI eventually approaches or surpasses the human capacity for broad thinking, there's an alternate future in which machines don't take over complex decisions entirely, but instead supply humans with relevant advice and data.

  • Basic forms of these "centaurs" — so named because, like the mythical creature, they fuse human and machine into one — already exist, as we’ve reported. Weld, for his part, considers this future more likely than Kissinger's.

The bottom line: Even in its current form, AI is well-suited for the Enlightenment-style thinking that Kissinger worries humans will abdicate.

  • In McMahon’s characterization, the Enlightenment elevated a type of analysis called "instrumental reason," a profit-loss calculus at which computers excel.
  • What remains is to set long-term goals. "This can tell us how to live. But something else has to tell us what to live for," McMahon said.

Kissinger warns of AI "ungoverned by ethical or philosophical norms." Such systems would presumably set unethical goals, perhaps harming humans as a result.

  • This is a concern whenever autonomous systems are at work, says Stuart Shieber, a Harvard computer science professor.
  • Shieber has a simple solution: don’t rely on these systems blindly. "We don’t give up our moral responsibility just because autonomous systems exist," he said.
  • "Let's design them such that either they improve on our own admittedly fallible moral behaviors, or at least they're no worse, or don't rely on them without some sort of human intervention," said Shieber. "It’s up to us to do that."