The death of AI idealism

Elon Musk arrives for court at the Ronald V. Dellums Federal Building in Oakland, California. Photo: Benjamin Fanjoy via Getty Images
Elon Musk's lawsuit against OpenAI and the recent spate of deals AI companies have cut with the Pentagon show how far the industry has drifted from the altruistic origin story it's long told about itself.
Why it matters: OpenAI and Anthropic were founded on the idea that AI would be deployed in ways that prioritized safety and the public good. Now those principles are giving way to an arms race for market share, as those companies and others release ever more powerful models.
The big picture: The men behind today's biggest AI labs often pitched themselves as a safer, less greedy alternative to earlier tech leaders.
- Acknowledging the breathtaking power of AI, they first rejected Silicon Valley's "move fast and break things" ethos.
- Now, AI behemoths are locked in an escalating competition for enterprise, consumer and government business.
- When the Pentagon blacklisted Anthropic over the company's insistence on restricting how its AI could be used, including for mass surveillance and fully autonomous weapons, rivals swooped in and agreed to the "all lawful use" terms Anthropic had rejected.
- Meanwhile, just last week the Pentagon reached an agreement allowing Google's Gemini models to be used for "any lawful government purpose," Axios' Maria Curi confirmed.
Flashback: Sam Altman and Musk co-founded OpenAI in large part out of a desire to develop artificial general intelligence before Google and its AI chief, Demis Hassabis.
- Musk was obsessed with the idea of Hassabis and his corporate bosses dominating the world's most powerful technology.
- Hassabis, for his part, was focused more on AI's potential to cure diseases and power new scientific discoveries.
Zoom in: Musk's court case centers on his argument that Altman and OpenAI president Greg Brockman should not be trusted with a for-profit AI company.
- One big problem: Musk runs xAI, his own for-profit OpenAI rival. His argument asks jurors to distrust OpenAI's profit motive while overlooking his own.
- "I suspect that there are a number of people who do not want to put the future of humanity in Mr. Musk's hands," U.S. District Judge Yvonne Gonzalez Rogers told the trial's lawyers.
The case also hinges on the belief that AI is, in fact, a danger to humanity.
- Musk used his first two days of testimony in Oakland to repeat his fears that AI could kill us all.
- On Musk's third day of testimony, Judge Gonzalez Rogers cut off that line of argument, warning that AI catastrophe and extinction were outside the scope of the case.
Context: Anthropic CEO Dario Amodei straddles both visions of AI, touting his startup as a safer version of what came before while also warning AI could wipe out half of all entry-level white-collar jobs. He called AI a "serious civilizational challenge" that will "test who we are as a species."
- Nvidia CEO Jensen Huang recently argued that these apocalyptic warnings are themselves dangerous, saying the AI CEOs who use them (presumably including Amodei) have "a god complex."
Driving the news: In testimony Monday, Brockman acknowledged that he helped launch OpenAI as a nonprofit AI lab and agreed with its original promise to advance AI "to benefit humanity as a whole," free from the need to generate financial returns.
- He also acknowledged that his stake in OpenAI's for-profit arm may now be worth more than $20 billion, perhaps closer to $30 billion.
The latest: The New York Times reported Monday that the Trump administration — which has taken a laissez-faire approach to regulating AI — is considering new oversight.
- Per the report, the White House is considering creating a working group of tech execs and government officials to vet the safety of new AI models before they're publicly released.
- Axios reported other details of the emerging plan.
What we're watching: Testimony in the Musk trial continues this week, as jurors weigh whether OpenAI's change in structure compromised its original mission or preserved it.
Bottom line: It's all a far cry from the do-good idealism AI's founders once prided themselves on.
