Microsoft launches "Humanist Superintelligence" push

Mustafa Suleyman. Photo: Microsoft
Microsoft is launching its own effort toward superintelligence. AI chief Mustafa Suleyman told Axios the company plans to build safer, more human-centered frontier models.
Why it matters: The move follows Microsoft's renegotiated deal with OpenAI and signals the company's intent to catch up in an expensive and crowded race to build artificial general intelligence.
Zoom in: Suleyman will lead the Microsoft AI (MAI) Superintelligence team.
- Karén Simonyan, Microsoft AI chief scientist, and other core MAI leaders and researchers will shift their focus to what Suleyman called "Humanist Superintelligence."
- "We're definitely expanding the team and looking at folks with more fundamental research capabilities," Suleyman told Axios.
Zoom out: AGI and superintelligence both refer broadly to AI systems that can equal or surpass human intelligence across a wide range of disciplines.
Driving the news: Suleyman detailed the effort in an interview with Axios and a blog post Thursday, calling for Microsoft to build a highly powerful AI that's focused on serving humanity, as opposed to maximizing performance or other goals.
- The software giant had been contractually prevented from pursuing artificial general intelligence under its prior deal with OpenAI, though the company has been developing smaller language models and algorithms for image and voice.
What they're saying: "The project of superintelligence has to be about designing an AI which is subservient to humans, and one that keeps humans at the top of the food chain," Suleyman told Axios, noting that it's remarkable that such a statement even needs to be made.
- "Humans matter more than AI," he wrote in the post.
Yes, but: Suleyman rejects the narrative of the AI "race" to AGI.
- He says results from the new Superintelligence Lab will take time and should be seen as "a wider and deeply human endeavor to improve our lives and future prospects."
- "I think it's still going to be a good year or two before the superintelligence team is producing frontier models," he said in the interview.
The big picture: OpenAI, Anthropic, Google, Meta and Ilya Sutskever's Safe Superintelligence are all pursuing similar ambitions, as are a variety of institutions in China.
- On Wednesday, Nvidia CEO Jensen Huang said China is on track to win the AI race.
Between the lines: Microsoft's emphasis on safety and human-centricity comes as the regulatory environment moves away from those priorities. A key risk is that Microsoft's approach could prove costlier or less efficient than approaches developed with fewer safeguards.
- "It's a difficult one to manage," Suleyman acknowledged. One example is deciding when and how to allow AI systems to learn on their own, a technique that brings both risk and opportunity for rapid advancement.
- "The reality is that performance gains will come from recursive self-improvement, and we are also pursuing RSI, and we should."
- Suleyman, who has been focused on the role of humanity in AI since his time running the startup Inflection AI, says he doesn't see himself or his Microsoft colleagues as skeptics or doomers.
- "We are accelerationists," he said. "We are great believers in the power of science and technology to make the world a better place. And we want it to go as fast as possible, but it can't be subject to no constraints."
