AI's global safety dance

Illustration: Lindsey Bailey/Axios
AI safety experts from around the world are convening in San Francisco this week to compare notes on how to evaluate and mitigate the risks posed by artificial intelligence.
Why it matters: Governments are eager to capture AI's potential benefits, but officials are also worried about its risks.
- Anthropic CEO Dario Amodei assured the gathering that the two goals need not be seen in opposition, noting that investments in making AI systems safer often help advance the underlying technology itself.
- "We refer to this as a 'race to the top,'" Amodei said, pointing to work around making AI systems more explainable that has also helped make models more effective.
Yes, but: Amodei warned that the risks posed by AI are real, including catastrophic outcomes that may seem far-fetched today, especially as generative systems become more powerful and are given greater autonomy.
- "We need to start acting now to mitigate these risks," Amodei said.
The big picture: Politicians and leaders from tech companies joined regulators from more than a dozen countries for the inaugural convening of the International Network of AI Safety Institutes.
- Amodei was the most prominent tech executive to speak at the event, but many other tech giants were represented, including IBM, Apple and Salesforce.
- Much of the initial discussion on Wednesday focused on the broad approaches needed to ensure that AI is developed safely.
- Later technical sessions focused on the challenges posed by having to test and evaluate systems that even their creators don't fully understand, and in a field where the state of the art is rapidly changing.
What they're saying: On a panel of global AI leaders from Africa, Asia and Europe, a representative of Singapore's government voiced a sentiment that is common among many developing countries: the desire not to be left behind.
- "Even as we talk about the misuse of AI, we are also mindful of the missed use of AI, meaning the missed opportunities," said Hong Yuen Poon, a deputy secretary in Singapore's Ministry of Digital Development and Information. "Don't get me wrong. Safety is very important and we do take safety very seriously in Singapore."
- For his part, Poon suggested a pragmatic approach, trying out promising technology at a small scale before engaging in widespread use.
Between the lines: Barely mentioned in Wednesday's opening session — but looming large in the minds of many — was how the re-election of Donald Trump will affect future U.S. efforts.
- Organizers from the State and Commerce departments took pains to stress the bipartisan nature of their work — including video testimonials from Congressional leaders in both parties.
- "We all share a desire to make sure that we are mitigating the potential risks to public safety and national security so we can harness the enormous potential of this breakthrough technology," Elizabeth Kelly, Director of the U.S. AI Safety Institute, told Axios. "It's why you've seen members of Congress on both sides of the aisle vote to fund the AI Safety Institute, co-sponsor and vote for legislation that would formally authorize us and, in fact, speak today."
- Commerce Secretary Gina Raimondo, the highest-ranking U.S. official attending the event, stressed that the issues around AI safety are bigger than politics. "It's frankly in no one's interest, anywhere in the world from any political party, for AI to be dangerous."
What's next: This gathering comes ahead of a summit for global heads of state and other leaders planned for February in Paris.
