Nov 3, 2023 - Technology

Global leaders commit to pre-deployment AI safety testing


The U.K. wrapped up its first global AI summit with an agreement among 27 governments, including those in the European Union, to safety-test models before and after they're deployed.

Why it matters: The summit marks a win for Elon Musk, Sam Altman and others who have sounded alarms about the existential threats of AI, even as they're building some of the most notable AI products.

The British government succeeded in bringing the U.S., China, India, Japan and EU countries together to create a baseline for future action and to safety-test AI models at the London-based AI Safety Institute, which aims to "act as a global hub on AI safety," according to U.K. Prime Minister Rishi Sunak.

  • The countries agreed to work together to help prevent "serious, even catastrophic, harm, either deliberate or unintentional," from the most powerful AI models, according to a policy paper released by the U.K. government.
  • OpenAI, Google DeepMind, Microsoft, Meta and other AI companies also signed on to the non-binding agreement.

Between the lines: "Until now the only people testing the safety of new AI models have been the very companies developing it. We shouldn't rely on them to mark their own homework," Sunak said in a statement at the end of the summit.

Yes, but: While the summit had a surprising unity of purpose given the range of countries participating, it's not clear which AI models would be subject to testing, or when.

  • The Biden administration's executive order contains even more concrete actions — a win for those who argue it's more critical to focus on immediate AI risks.

The other side: Executives at medium-sized, publicly listed tech companies expressed frustration that there was no way for them to participate in the summit, since invites were reserved for companies with the largest AI models.

  • Civil society groups that managed to score an invite called on governments in a statement to "prioritize regulation that addresses well established harms that impact people's rights and daily lives now." Trade unions echoed that sentiment in a separate open letter.

What they're saying: Sunak "deserves enormous credit for galvanizing the international community," said Chris Meserole, executive director of the Frontier Model Forum, who now wants to see "clear standards and best practices for frontier AI safety."

  • AI Now Institute executive director Amba Kak, one of three civil society representatives at the summit table, praised pre-deployment testing commitments, but warned, "We are at risk of further entrenching the dominance of a handful of private actors over our economy and our social institutions."
  • The Silicon Valley Leadership Group, an industry group chaired by executives from Google and Johnson & Johnson, issued its own set of five principles for human-centered and responsible AI.

What's next: More global meetings are planned, with South Korea set to co-host a virtual follow-up in six months and France to host the next in-person summit in a year.

Editor's note: This story has been updated with additional detail about the companies participating in the agreement.
