Nov 28, 2023 - Technology

Former Google CEO: Companies' AI guardrails "aren't enough" to prevent harm

Eric Schmidt at Axios' AI+ Summit. Photo: Eric Lee/Axios

The guardrails that AI companies add to their products to prevent them from causing harm "aren't enough" to control AI capabilities that could endanger humanity within five to ten years, former Google CEO Eric Schmidt told Axios' Mike Allen on Tuesday.

The big picture: Interviewed at Axios' AI+ Summit in Washington, D.C., Schmidt compared the development of AI to the introduction of nuclear weapons at the end of the Second World War.

  • "After Nagasaki and Hiroshima, it took 18 years to get to a treaty over test bans and things like that. We don't have that kind of time today," Schmidt said.
  • The danger, he said, arrives at "the point at which the computer can start to make its own decisions to do things" — when, say, such a system discovers access to weapons and we can't be certain it will tell us the truth.
  • Two years ago, that moment was expected to be 20 years off. Today, Schmidt said, some experts think it's only two to four years away.

What's next: Schmidt argued that the best solution is to create a global body akin to the Intergovernmental Panel on Climate Change (IPCC) to "feed accurate information to policymakers" so that they understand the urgency and can take action.

Of note: Schmidt said he's optimistic that AI will offer wide benefit to vast human populations: "I defy you to argue that an AI doctor or an AI tutor is a negative. It's got to be good for the world."

On the OpenAI boardroom drama: "It's pretty simple. The board fires Sam [Altman]. Sam fires the board." Once the OpenAI staff showed its loyalty to Altman with a mass open letter, Schmidt said, the outcome was inevitable: "How much more feedback do you need from your 360?"

Go deeper: 3 reasons why the OpenAI meltdown matters
