Axios AI+

December 03, 2025
RIP to Claude, the California Academy of Sciences' beloved albino alligator who died at age 30. Today's AI+ is 1,227 words, a 4.5-minute read.
1 big thing: What's keeping Altman up at night
OpenAI CEO Sam Altman is facing pressure on three fronts that have him seeing red: Wall Street, chatbot users and Google.
Why it matters: They're all testing a CEO known for staying cool at a time when his competitive advantage looks like it's under threat.
- That reportedly inspired him to declare a "code red surge" to employees Monday to focus on improving ChatGPT.
Here's what's keeping Altman up at night:
1. Money
OpenAI has long underestimated how many people would use ChatGPT and how expensive training and running the model would become.
- The original Microsoft deal helped cover those costs. Now that the partnership has been restructured, OpenAI will have to generate far more revenue on its own to fund future training and inference.
- OpenAI has committed to spending $1.4 trillion on infrastructure and says it hopes to build a gigawatt of new capacity per week, at a cost of around $20 billion per gigawatt (see the rough math after this list). But when questioned about those figures, Altman has gotten defensive of late.
- You don't have to understand or even believe in the AI bubble to know that allegations of circular investing, mounting debt and a weakening job market are rattling the industry — to say nothing of a blowup over even the hint of a federal backstop for AI companies.
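For scale, here's a minimal back-of-envelope sketch using only the figures cited above (roughly $20 billion per gigawatt, a gigawatt per week). It assumes that pace is sustained for a full year, which is our illustrative assumption rather than anything OpenAI has promised:

```python
# Rough scale check using the newsletter's own figures (illustrative only):
# ~1 GW of new capacity per week at ~$20 billion per GW, held for a year.
COST_PER_GW_BILLION = 20       # dollars, in billions (figure cited above)
GW_PER_WEEK = 1                # stated buildout goal
WEEKS_PER_YEAR = 52

annual_spend_billion = COST_PER_GW_BILLION * GW_PER_WEEK * WEEKS_PER_YEAR
commitment_billion = 1_400     # the $1.4 trillion infrastructure commitment

print(f"Implied annual spend: ~${annual_spend_billion / 1_000:.2f} trillion")
print(f"$1.4T covers ~{commitment_billion / annual_spend_billion:.1f} years at that pace")
```

At that pace, the buildout alone would run roughly a trillion dollars a year, which is why the revenue question above looms so large.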
2. Safety
Altman has repeatedly expressed surprise that people would use ChatGPT as a therapist, confidant and romantic partner.
- Facing multiple lawsuits from families whose teens and other loved ones got bad advice while in crisis, OpenAI has added parental controls and other mental health guardrails.
- These updates have neither stopped the lawsuits nor appeased users as a whole.
- The company's latest model — GPT-5 — had a bumpy rollout, and users openly rebelled, calling the new model "the lobotomization of GPT-4o" and accusing the company of "psychological paternalism."
3. Gemini
The real threat is Google's Gemini, backed by the money, data and chips to compete with OpenAI on an entirely different level.
- Google was caught flat-footed when OpenAI released ChatGPT three years ago, but the search giant is finally catching up.
- Google released Gemini 3 Pro last month, the latest version of the AI model that will power both its core search engine and the Gemini app.
- Salesforce CEO Marc Benioff heaped praise on Gemini 3, saying "I'm not going back."
For the record: OpenAI pointed Axios to a thread on X from ChatGPT head Nick Turley, which said in part: "Our focus now is to keep making ChatGPT more capable, continue growing, and expand access around the world — while making it feel even more intuitive and personal."
The intrigue: The one place OpenAI isn't facing pressure? The White House.
- President Trump's aggressive pro-AI agenda has been a great gift to OpenAI.
- Longtime Altman antagonist Elon Musk — no longer aligned with Trump — has also been unable to slow OpenAI, despite repeated attempts.
What we're watching: Gemini's app downloads are catching up to ChatGPT's.
- Estimates show Gemini's generative AI market share doubling in the last year, with its brand growing faster than OpenAI's.
- Google's long lead in search, user data and product distribution shows no signs of shrinking.
The bottom line: OpenAI's next few months will show whether Altman can steady the company under pressure or whether it ends up as the next Betamax.
2. Amazon offers custom-trained models
Amazon Web Services announced a flurry of new chips and models yesterday along with a new offering that allows businesses to integrate their own data to train custom versions of frontier models.
Why it matters: It's the latest step in Amazon's effort to define itself as more than just a low-cost cloud provider to run other companies' models.
Driving the news: Announced at its re:Invent conference in Las Vegas, Nova Forge allows companies to inject their own data at various stages of training.
- Amazon says the feature offers enterprises custom AI models with more industry-specific knowledge.
What they're saying: In an exclusive interview with Axios, AWS CEO Matt Garman said the company is delivering on an oft-expressed need.
- "What I hear over and over and over again is 'What I would really love is a frontier agent or a frontier model that actually just understands my data,'" he said.
The big picture: Not long ago it seemed as if OpenAI was running away with the AI race; Google, and now Amazon, are showing there is plenty of competition.
3. AI firms flunk existential risk planning
None of the leading AI companies have adequate guardrails in place to prevent catastrophic misuse or loss of control of their models, according to the Winter 2025 AI Safety Index, out today from the Future of Life Institute.
Why it matters: AI companies are desperately chasing artificial general intelligence (AGI) and superintelligence, systems they promise will someday surpass humans.
- The potential for uncontrolled or destructive outcomes grows as models become more powerful.
The big picture: The Future of Life Institute is a nonprofit that releases regular safety assessments of leading AI companies.
- Anthropic had the highest overall score, but still received a grade of "D" for existential safety, meaning the company doesn't have an adequate strategy in place to prevent catastrophic misuse or loss of control.
- This is the second report in a row where no company received better than a D on that measure.
- All the AI firms except Meta, DeepSeek and Alibaba Cloud responded to a questionnaire from the institute, which gave each company a chance to share additional information about its safety practices.
What they're saying: Leaders at many of the companies have spoken about addressing existential risks, per the report.
- This "rhetoric has not yet translated into quantitative safety plans, concrete alignment-failure mitigation strategies, or credible internal monitoring and control interventions," researchers wrote.
Between the lines: Anthropic and OpenAI scored A's and B's on information sharing, risk assessment, and governance and accountability.
- But there was a massive and widening gap between the front three — Anthropic, OpenAI and Google DeepMind — and the rest: xAI, Meta, DeepSeek and Alibaba Cloud.
- xAI and Meta have risk-management frameworks but lack commitments to safety monitoring and have not presented evidence that they invest more than minimally in safety research, per the report.
- Even if the U.S. companies clean up their existential risk act, the world would still be relying on China and other foreign actors to do the same, Axios' Jim VandeHei and Mike Allen write.
- The Chinese companies — DeepSeek, Z.ai and Alibaba — do not publish safety frameworks and therefore received failing marks in that category.
Flashback: The Future of Life Institute has been warning about runaway AI risk for years.
- In March 2023, the organization released a letter — signed by xAI owner Elon Musk — calling for a six-month pause on frontier-model development.
- That proposal was largely ignored.
The bottom line: The tension between sprinting ahead for innovation and slowing down for safety has come to define the AI age.
- Right now, the sprinters appear to be winning.
4. Training data
- The AI boom is threatening a new global crisis — shortages of vital memory chips. (Reuters)
- A Claude user got Anthropic's chatbot to bare its "soul," a 14,500-token document meant to guide its actions. (LessWrong)
5. + This
Timber! Two Stanford Trees have fallen for each other — literally.
Thanks to Megan Morrone for editing this newsletter and Matt Piper for copy editing.