New bidding war for AI's biggest brains

Illustration: Allie Carl/Axios
Investors' latest billion-dollar bet on an AI startup is throwing cash at brainy researchers promising to kick AI into "super" gear.
Why it matters: The money flowing into AI, which already pays for the scarce high-end chips and rivers of electricity needed to train vast new models, is also enabling a handful of industry pioneers to write their own tickets.
Driving the news: Ilya Sutskever, an OpenAI co-founder who left the firm in May, has raised $1 billion to fund Safe Superintelligence (SSI), the startup announced last week.
- SSI will be "the world's first straight-shot SSI lab, with one goal and one product: a safe superintelligence," per a post on X by Sutskever and his co-founders: fellow OpenAI vet Daniel Levy and former Apple AI chief Daniel Gross.
SSI boasts that it has no product — or even a product plan.
- There'll be "no distraction by management overhead or product cycles," the post declares.
- Translation: These founders are saying, "Finally, we can do our research without being bothered by anyone else!" — including accountants, marketers or even customers.
Yes, but: The great tech research labs of the past — like Xerox PARC and Bell Labs — were funded by an overflow of corporate profits from real businesses.
- SSI is getting its money from venture capitalists (including Andreessen Horowitz, Sequoia and Ron Conway's SV Angel), who typically expect a return within five to 10 years.
Between the lines: Both OpenAI and Anthropic (itself founded by OpenAI alumni) started out with manifestos like SSI's, aiming to make leaps in AI capability with a safety-first mindset.
- Today, both those companies are in the more mundane business of providing working generative AI tools to a broad consumer market.
Catch up quick: Sutskever, a renowned figure in AI circles, is a machine-learning pioneer who studied under Geoffrey Hinton, worked for Google Brain and co-founded OpenAI in 2015.
- Like his mentor Hinton, Sutskever has long been passionate about the need to build AI with protections against doomsday scenarios.
- As an OpenAI board member, Sutskever voted to fire CEO Sam Altman in November 2023, but then reversed gears and supported Altman's reinstatement.
- Sutskever left OpenAI in May and co-founded SSI in June of this year.
The big picture: All the big players in AI — including OpenAI, Google, Microsoft and Meta — are now committed to pursuing AGI, or artificial general intelligence, roughly meaning "AI that's as good as human beings."
- But none of these companies has been able to rigorously define this goal or provide a yardstick for assessing when it's been met — although OpenAI has reportedly developed some standards for internal use.
Our thought bubble: A billion dollars sounds like a fortune, but today's advanced AI research can burn through bucks fast.
- SSI's distraction-free, open-ended research lab is likely to find itself worried about burn rate and runway long before it invents a super-brain.
What we're watching: Sutskever's startup could quickly become a talent magnet and begin making research breakthroughs.
- But it's also possible that paradigm-shifting innovation will emerge from some team of not-yet-famous grad students laboring at a university or corporate campus anywhere on earth.
- That's how we got the idea that kicked off today's generative AI wave: a 2017 paper introducing the transformer architecture that underpins today's AI models.
The bottom line: SSI's "superintelligence" target is a bold move-the-goalposts play.
- Sutskever and company are saying AGI is too humble a goal: For them, it's uber-AI or bust.
- As for the "safe" part, right now it's just a vague aspiration.
