The post-U.K. AI summit world

- Ashley Gold, author of Axios Pro: Tech Policy

Vice President Kamala Harris greets U.K. Prime Minister Rishi Sunak on Day Two of the AI Safety Summit 2023 at Bletchley Park, England, on Thursday. Photo: Tolga Akmen/EPA/Bloomberg via Getty Images
World leaders are making it clear they want to move ahead in unison on ensuring AI is both safe and useful to people, even as they unveil competing measures to do so.
Driving the news: Vice President Kamala Harris visited the United Kingdom last week to talk about safety and artificial intelligence alongside leaders from the U.K., the EU, Japan, India and China. That same week, the Biden administration unveiled a sweeping executive order on AI.
- Participants, including companies like OpenAI, DeepMind, Microsoft and Meta, signed an international declaration recognizing the need to address AI development risks.
- The same week, the G7 agreed on a code of conduct for companies developing advanced AI systems.
Why it matters: AI has brought world governments together around a shared goal of unified regulation and governance more quickly and easily than past debates over how technology should be controlled.
What they're saying: "I think the momentum is really palpable," Max Tegmark, a professor of physics at MIT who has been vocal about ensuring AI is controlled and safe, told Axios while attending the U.K. summit.
- "Instead of having a race to the bottom, with companies undercutting each other on safety, we're starting to see this race to the top," he said, with country leaders attending the summit announcing AI safety initiatives one after another.
- "It's just day and night compared to where we were a year ago. This is mirroring the way we succeeded with regulating medicines."
Zoom in: "Of course everyone wants to show their own audience they are on top of this," Morten Løkkegaard, a Danish member of the European Parliament who visited Washington last week for meetings with federal agencies and members of Congress on tech, told Axios.
- "It's not a coincidence everyone is running around saying look what we can do. But it's very positive, and we should have been doing that for years now."
- "Since generative AI introduced itself via these chatbots, politicians from all around the world are experiencing a certain pressure from the public and voting community."
The intrigue: "It was really moving to see the U.S. and China standing side by side on the stage.… They're shaking hands and talking about how they will both want to work to put safety standards in place," Tegmark said.
- "There was generally more unity than I expected.… None of the usual squabbling about whether you should focus on immediate term harms or existential harms, because [from the opening], both were important."
Yes, but: It's easy for world leaders to say governments need to agree on how to handle AI while photographers snap away and people around the globe listen in.
- It's a lot harder to figure out what that looks like in practice. Government structures vary wildly around the globe, as do national approaches to surveillance, free markets and business development, privacy and civil rights.
Quick take: "It's good to have a global conversation on the importance of AI," Dana Rao, executive vice president at Adobe, told Ashley. "But in terms of where I think substantive value is going to be given to the companies, it's going to be in these details.… That's how we're going to be able to move forward.
- "If we don't even know whether we're subject to a law or not because we don't understand the criteria of what a foundation model is, then we're sort of stuck at the starting gate."
What to watch: Countries will have to ensure adequate funding and personnel to carry out and enforce any proposed AI safety rules.
- That also varies by country. The U.S. generally prefers a sector-by-sector approach to regulation, while the EU has government agencies at both the bloc and country level.
The bottom line: "It's pretty easy to stay top-level and say safety is important and people should not be able to create nuclear bombs from the comfort of their living room," Rao said. "That part's easy, and everything else is hard."
- Løkkegaard said he hopes some sort of body like the U.S.-EU Trade and Technology Council will be formed specifically for AI to keep conversations aligned.