Axios interview: Google's Hassabis warns of AI race's hazards
The more artificial intelligence becomes a race, the harder it is to keep the powerful new technology from becoming unsafe, Google DeepMind CEO Demis Hassabis told Axios in a wide-ranging interview at this week's AI Action Summit in Paris.
Reality check: Rules to control AI only work when most nations agree to them, Hassabis said, and that's only getting harder — as made clear by the summit's inconclusive outcome.
"It seems to be very difficult for the world to do," said Hassabis — who won a Nobel Prize last year and now leads Google's AI work. "Just look at climate. There seems to be less cooperation. That doesn't bode well."
- Indeed, at the Paris Summit, the U.S. and U.K. refused to sign on to a communique on AI safety that had already been criticized for lacking enforceable commitments.
The big picture: Hassabis said the need for norms and rules grows as the world gets closer to so-called artificial general intelligence (AGI), meaning advanced AI systems that can do a broad range of tasks faster and better than humans.
- "But it has to be international," Hassabis said. "Otherwise you'll get nations competing and other things like that."
Hassabis doesn't have a specific recipe for creating that international cooperation, but he said it will need to involve governments, companies, academics and civil society.
- "It is too important for it only to be one set of people working on this," he said. "It's going to require everybody to come together — hopefully, in time."
Between the lines: Hassabis also stressed the need for a diverse group of people to be involved in developing AI, even as companies, including Google, pull back from programs to diversify workforces that remain heavily white and male.
- "Research advances are better with a big diversity of thinking in your team," he said. "That's kind of well-proven in science and in research."
- Having a diverse set of voices in the room when it comes to deploying technologies is even more critical, he said, "because that's when it affects people's lives."
- "I think that's where you know you want the people that are being affected to have a say as to how those technologies get deployed," he said.
Open source AI has become linked in the public mind with both Meta and China, but Hassabis said Google is a "huge proponent of open science and open source."
- "We've open sourced many, many, many things in the past and obviously published almost all of our innovations, including transformers and AlphaGo, and all of the things that the modern industry is built on," he said. "Clearly that makes progress go faster."
- But he warned that the spread of open source AI only sharpens the technology's root ethical dilemma: "How do you stop bad actors repurposing general purpose technology for harmful ends?"
- "Powerful agentic systems are going to be built, because they'll be more useful, economically more useful, scientifically more useful. ... But then those systems become even more powerful in the wrong hands, too."
The bottom line: Hassabis said it will take real effort to manage societal change even if the tech industry succeeds in developing AI safely.
- "I think there needs to be more time spent by economists, probably, and philosophers and social scientists on what do we want the world to be like, even if we get everything right, post AGI," he said. "I'm surprised there's not very much discussion about that, given the relatively short timelines."
