AI CEOs are scaring America

Illustration: Brendan Lynch/Axios
It's a CEO's job to sell their product — not to scare people into thinking it will ruin their lives.
- Tell that to OpenAI's Sam Altman and Palantir's Alex Karp, who've both delivered bleak warnings about the disruption AI could bring.
Why it matters: Portraying AI as immensely powerful — even dangerous — reinforces the idea that only a few companies can build it safely. That's an effective message for fundraising but a scary pitch to consumers.
The big picture: AI is getting scarier and more unpopular as the technology improves and elections approach.
- Only 26% of voters view AI positively, making it even less popular than ICE, according to an NBC News poll of 1,000 voters.
- Privately, several AI CEOs tell Axios they're nervous an anti-AI wave could hit hard enough to power a "ban AI" movement heading into 2028.
- But they feel lost and divided on how to deliver a more uplifting message until AI does something beyond coding for engineers or creating agents that seem destined to take human jobs.
State of play: "They're scaring the bejeezus out of the public," White House AI czar David Sacks said on the All-In podcast, referring to a slew of recent comments from AI CEOs:
- Anthropic CEO Dario Amodei has warned AI could wipe out huge swaths of white-collar jobs and recently said he can't rule out that his own product, Claude, may be conscious.
- OpenAI CEO Sam Altman recently said AI is unpopular, but it will be treated like a utility someday, one people will pay for — a tough sell amid a consumer affordability crisis and high gas prices.
- Palantir CEO Alex Karp warned on CNBC of AI's extreme societal disruption — a negative impact to "the economic and therefore political power" of "highly educated, often female voters, who vote mostly Democrat," while boosting the relative position of vocationally trained, working-class people (often men).
Zoom in: What looks like a bad sales pitch for consumers could help win over investors and big business customers.
- Karp framed the disruption as necessary for national security. "If you decouple [AI] from the support of the military, you're going to have an enormous problem explaining to the American people why is it that we're absorbing the risk of disrupting the very fabric of our society," he said.
- The message: Tying AI to national security could make the disruption easier to justify — and sell.
- For Karp, linking AI to military superiority is about preserving U.S. power in a global tech race. Quite the pitch to investors.
- Amodei has argued that the responsible path forward is to build the most powerful AI with strong guardrails before less careful competitors do. Anthropic raised $30 billion in February at a $380 billion valuation.
What they're saying: "It's part fundraising, it's part justifying their existence, it's part audience engagement, it's probably a little part ego, too," Steve Dowling, former tech executive and co-host of the Communication Breakdown podcast, said on a "Mixing Board, powered by Axios" call.
Yes, but: The doomsday CEO commentary could also simply be honest.
- Some executives argue the warnings reflect genuine concern about AI's societal impact, Paul Keary, CEO of Teneo, an executive consultancy firm, told Axios.
- He added that the U.S. is slow on regulation but strong on "intellectual honesty" from CEOs, which could offset the lack of government guardrails without slowing corporate growth.
- That approach could backfire if all the doomsday talk encourages voters to push for AI regulation.
Zoom out: The AI-fear narrative is largely a U.S. phenomenon.
- A 2025 Stanford report found developing countries have more trust in AI: over 80% of respondents in China held optimistic views of the technology, compared with only 39% of Americans.
- Nvidia CEO Jensen Huang has said this gap in trust is contributing to an innovation gap that China is winning.
The bottom line: The CEOs selling AI know how to win over investors, but someday the real customers could be the public — and people may hesitate to buy a technology they've been told to fear.
Eleanor Hawkins, author of Axios Communicators, contributed reporting.
