AI pioneer Mustafa Suleyman: AI needs a "containment" plan
- Ryan Heath, author of Axios AI+

DeepMind co-founder Mustafa Suleyman — now CEO of Inflection AI — is on a mission to convince Silicon Valley and Washington that powerful AI systems should be licensed by government to ensure the survival of humanity.
Why it matters: AI needs to be "contained," Suleyman argues, lest it slip free from human control.
The big picture: "Containment" was once the byword for an American foreign policy aimed at limiting the Soviet Union's expansion, and today we're most likely to hear the word in the context of nuclear power.
- Suleyman told Axios in an interview that he doesn't see tech as a Soviet-style adversary, but thinks retaining human control of AI will require the same kind of long-range master plan as the Cold War.
The coming task, Suleyman said, is to divide power and decision-making authority between people and machines. In his new book, he writes that this will require "an overarching lock uniting cutting-edge engineering, ethical values, and government regulation."
- He cited today's "unbelievably safe" aircraft as an example of successful containment of technology via "an extremely strict licensing regime."
- The thinking behind recent tech export controls against China could also be applied domestically, to limit who develops the most powerful AI.
- Suleyman believes AI should be banned entirely in certain situations, like election campaigns.
The new wave of AI leaders is more responsible than the heads of "old social media companies and tech companies," Suleyman said.
- While he stopped short of naming and shaming the likes of Elon Musk, Mark Zuckerberg and Tim Cook, Suleyman clearly believes that he, Sam Altman and other peers are more ethical tech leaders.
- "We've genuinely tried to adopt the precautionary principle," he said. "We've all set up public benefit corporations to experiment with incentivization structures, governance structures." (Inflection AI is a public benefit corporation, and Altman's OpenAI is a non-profit with a for-profit subsidiary.)
- Suleyman has visited the White House twice since May with other AI leaders for AI safety discussions. He gives governments credit for "moving faster than they've really ever moved in response to a new technology."
Catch up quick: Suleyman co-founded DeepMind in 2010, then served as Google's vice president for AI products and policy after Google acquired DeepMind in 2014.
- The British entrepreneur, now 39, went on to co-found Inflection AI along with Reid Hoffman and Karén Simonyan. The new company promises "a kind and supportive companion" through its Pi assistant.
- Suleyman's new book, "The Coming Wave" — launched Tuesday night in New York by former Google CEO Eric Schmidt — is a plea to develop AI containment plans now, while there's still time.
State of play: "I don't think there's any real harm at this stage" of AI development, Suleyman said — noting that chatbots can say alarming and surprising things, but can't actually do much.
- Even so, Suleyman would "rather we act too early and slow down some innovation" than delay regulation. "When we're 100x larger than we are with today's frontier models, we'll be welcoming government in at that point," he said.
The big picture: The AI "wave" is different from earlier bouts of innovation, Suleyman writes. "The entirety of the human world depends on either living systems or our intelligence," he argues — but both of those are now subject to "exponential innovation and upheaval" for the first time.
- Suleyman said he thinks the proliferation of revolutionary new AI and synthetic biology is inevitable because, combined, they will compound intelligence in irresistible ways.
AI will help autocratic governments centralize power and "turbocharge surveillance," but those regimes will also face many new risks from the same forces of AI and synthetic biology, Suleyman said.
- "Wherever there is power, it will be amplified and accelerated, because it's cheaper to use that power," he says — but that also means "more surface area for an autocratic regime to be attacked on."
Human control of AI will remain possible because of the physical elements of AI systems, he said.
- "These technologies are grounded in chips and they operate on fiber optic cables. And even if an AI model is completely open source and online, the internet is quite a well-policed environment," he said.
- Yes, but: Other AI experts are split on the issue. Of 213 AI experts surveyed by Generation Lab for Axios, only 19% were confident humans will stay in control of AI.
Between the lines: Suleyman wants us to see the next generations of AI and synthetic biotech as fundamentally different from previous waves of innovation. But he also urges us to treat AI like a "regular invention instead of mystifying it in a sci-fi wet dream."