May 16, 2024 - Technology

Doomers have lost the AI fight

Illustration: Aïda Amer/Axios

When Ilya Sutskever left OpenAI this week, the firm lost its last influential leader known to question CEO Sam Altman's push to deploy AI fast.

The big picture: OpenAI, founded as a nonprofit to pursue a responsible vision of advanced AI, now leads an industrywide charge to distribute generative AI worldwide, even though the technology remains error-prone and unpredictable.

Why it matters: Altman's belief that AI needs to be quickly shared and widely used now, so that society can benefit from it and also adapt to it, has been embraced by every one of his competitors. It's now the consensus reality of Silicon Valley.

State of play: The AI doomers warned us that AI might wipe out human civilization.

  • Their fear that a runaway advanced digital intelligence might escape human control and enslave or destroy our species was shared by many of the AI field's luminaries and some of tech's high-profile billionaires.

If you take this danger seriously, well, it could still happen!

  • But the doomer scenarios have looked increasingly far-fetched as the public has grown more accustomed to the limits and frustrations of today's available AI tools.
  • Elon Musk, one of OpenAI's co-founders and an early dabbler in doomerism, is now plowing billions into his own AI company.

The AI ethicists, meanwhile, warned us that AI would only reproduce human flaws at exponential scale.

  • Their arguments underscored AI's propensity to fool people and spread lies, its potential to become a tool for discrimination and its likelihood of transforming humankind's biases into entrenched systems.
  • These dangers have grown, not shrunk, in the 18 months since ChatGPT took off — but the ethical critics, who once labored in the trenches alongside engineers at leading AI firms, have largely been sidelined or have departed in protest.

Now, it's accelerationists — people who argue AI's benefits will be so overpoweringly vast that slowing down the technology would be a crime — who are calling the shots. They control much of the industry's money, and they have taken its wheel.

Catch up quick: Altman's hold over OpenAI's direction was briefly challenged during the company's epic boardroom showdown last autumn.

  • In that episode, Sutskever, then a board member, first sided with a board majority that voted to fire Altman.
  • Then he switched gears and joined the vast majority of the company's employees in seeking Altman's reinstatement.

Altman has always talked about responsibility and caution, but under his direction, OpenAI continues to floor the pedal.

  • He can still sound like a doomer or an ethicist at times — but he acts like an accelerationist.

Case in point: This week's OpenAI demo of a perky, chatty voice assistant wowed viewers with its speed, versatility and colloquial conversation.

  • The demo also set off loud alarms for more critical observers, who saw a range of dangers down this road of anthropomorphic impersonation.
  • For such an assistant to fulfill its potential, users will have to entrust their work and personal lives to it.
  • But OpenAI has plainly designed it to charm users as well as answer their questions — opening the door to a range of misuses.

The tech industry's reputation for responsible custody of users' data, attention and interests has already taken hits over the past decade.

  • If you think the social media platforms made a mess by hooking users with attention-harvesting, ad-boosting engagement techniques, think about what could happen if AI personas we bond with start hawking products and promoting candidates.

What we're watching: As the industry sloughs off the doomers and the ethicists, both groups still have some footholds in Washington.

  • President Biden's executive order on AI is starting to have some impact, and Congress is just beginning to talk about new legislation.

Yes, but: The industry has already chosen a path of "develop first, ask questions later."

The bottom line: AI's foundations have been laid, and the framing and joists are going up fast, while the government is still trying to pass a building code.

Go deeper: Behind the Curtain — AI's doom or boom
