Axios AI+

November 24, 2025
I'll be off the rest of the week, but wanted to wish you all a happy Thanksgiving. My editors will send out the newsletter tomorrow and Wednesday. Today's AI+ is 1,164 words, a 4.5-minute read.
1 big thing: Big-name AI founders could hit a wall
Investors have handed billions of dollars to star AI executives, but their startups still face an uphill battle to compete with giants like OpenAI and Google.
Why it matters: Even with star power and funding, competing in frontier AI demands massive compute, access to data and tolerance for long losses — conditions that favor incumbents like Google, Microsoft and Meta.
Driving the news: A number of towering figures in the field have grown dissatisfied with their Big Tech jobs and opted to start up their own ventures.
- Meta AI chief scientist Yann LeCun — who has clashed with Meta leadership over research direction — is the latest star heading for the exits. Meta says it plans to partner with LeCun's new startup, which will focus on models with real-world reasoning.
Catch up quick: Former OpenAI executive Ilya Sutskever departed the ChatGPT maker in May 2024, after the failed ouster of Sam Altman, establishing Safe Superintelligence in June 2024 and raising more than $1 billion in funding.
- Mira Murati — briefly named OpenAI CEO — left the company in September 2024 and this year announced her new venture, Thinking Machines Lab.
- Former Amazon CEO Jeff Bezos earlier this month named himself co-CEO and backer of Project Prometheus, an AI startup focused on using AI to improve manufacturing of cars, spacecraft and other hardware.
The intrigue: Even Anthropic was founded by former OpenAI executives, though it's now far more developed than the newer startups.
- It's projecting a $9 billion annual revenue run rate by year's end and has sizable investments from Google and Amazon, plus $15 billion in newly announced funding from Microsoft and Nvidia.
Zoom in: Several lesser-known startups led by OpenAI alums have also raised significant funding.
- Worktrace AI, which aims to automate business operations by observing human workers in action, is led by Angela Jiang, an early OpenAI product manager.
- It has funding from a variety of investors, including Murati, ChatGPT head Nick Turley and OpenAI chief strategy officer Jason Kwon.
- Periodic Labs, a well-funded AI-for-science startup, is led by former OpenAI researcher William Fedus.
Between the lines: Many of these breakaway startups are focusing on areas they feel have been neglected, from AI safety to human centricity to real-world understanding.
- The OpenAI board fight and Anthropic's founding both stemmed from disagreements over safety, including how quickly to push frontier models into the world.
The big picture: It's early innings in the race for AI superintelligence, but it's already clear that even the smartest approach won't work without billions — if not trillions — of dollars in infrastructure.
- Training a frontier-sized model today can cost hundreds of millions of dollars and could soon approach $1 billion. Nvidia sold more than $50 billion in data center chips last quarter, another reminder of just how capital intensive this business is.
- Some analysts even suspect OpenAI could face a cash squeeze, given that it may need to borrow while Google, Meta and Microsoft can rely on massive cash flows to fund their AI investments.
What we're watching: Promising startups that lack resources could ultimately be acquired by the giants, which have the money, infrastructure and incentive to bring former employees' ideas back in-house.
2. AI can't hear everyone equally
Artificial intelligence is struggling to understand accented English and nonstandard dialects, creating problems that can cascade into biased hiring, grading or clinical records.
Why it matters: AI is deciding who gets a job interview, how students are graded, and what doctors record in a patient's chart. But major speech-to-text systems make far more errors for Black speakers than for white speakers.
How it works: Automatic speech recognition systems convert spoken words into text using acoustic models trained on millions of audio samples.
- Some companies use AI to transcribe and analyze interview responses, scoring candidates for jobs on clarity, keywords or sentiment.
- Schools use voice AI for oral reading tests, class captions and language learning.
- "Ambient" AI tools listen during doctor visits and convert conversations into medical notes.
- U.S. courtrooms are also using similar systems to transcribe proceedings.
Friction point: Various studies show that AI systems misinterpret speech from some Black speakers or others who don't use "standard English."
- Sarah Myers West, co-executive director of the AI Now Institute, told Axios that such errors can lead to misdiagnoses or false information in criminal cases.
- "We're already seeing AI replicate patterns of inequality," she said. "If these systems decide who gets a job interview or access to care, they risk amplifying those same divides."
- West said these AI systems still mishear people because they're being deployed without proper testing or oversight.
Zoom out: Allison Koenecke, an assistant professor of information science at Cornell Tech, tells Axios there's insufficient awareness of how AI speech models are being applied in "high-stakes domains" such as health care and criminal justice.
- "At face value, it seems fair because you're using the same speech model for everyone. But if that model is inherently biased, it leads to different outcomes for different people."
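Researchers typically quantify the disparity Koenecke describes with word error rate (WER), the standard accuracy metric for speech recognition: the number of word-level edits needed to turn a system's transcript into the reference transcript, divided by the reference length. A minimal sketch of a per-group comparison, using made-up transcripts rather than real study data:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# Hypothetical (reference, ASR output) pairs for two speaker groups.
group_a = [("the patient reports chest pain", "the patient reports chest pain")]
group_b = [("the patient reports chest pain", "a patient report chess pain")]

avg_a = sum(wer(r, h) for r, h in group_a) / len(group_a)
avg_b = sum(wer(r, h) for r, h in group_b) / len(group_b)
print(avg_a, avg_b)  # prints 0.0 0.6 — same model, unequal error rates
```

The same transcription model produces a perfect transcript for one group and misses three of five words for the other, which is the pattern the studies cited above found at scale.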
The intrigue: Koenecke said many Fortune 100 companies use HireVue, an AI-based interviewing tool that automatically transcribes and scores applicants' recorded answers.
- That data can be used to determine if the applicant gets another round of interviews or gets hired, Koenecke said.
- HireVue's chief science officer, Mike Hudy, told Axios in a statement that HireVue assessments help ensure that "every candidate is evaluated only based on job-related competencies and skills, not on appearance, race, age, or background."
The other side: Developers say they're expanding datasets and testing for "accent robustness."
- Companies like OpenAI, Amazon and Google have launched projects to collect more diverse speech samples and say their systems are improving.
- Some hospitals also use human reviewers to double-check transcripts from "ambient" AI scribes.
Case in point: OpenAI's Whisper speech-recognition model was trained on 680,000 hours of multilingual and multitask data to improve "recognition of unique accents, background noise and technical jargon."
- "Just collecting more data won't solve all problems. It needs to be a continued, longitudinal effort across many speech types, not a one-time dataset fix," Koenecke said.
3. Training data
- Apple legend Jony Ive says his stealth project with OpenAI, already in prototype, should be revealed within the next two years. (Axios)
- Insurance companies are balking at taking on billions in liability over business use of AI. (Financial Times)
- Russia-linked actors are flooding the internet with disinformation in hopes of swaying AI chatbots to adopt their positions. (The Guardian)
4. + This
Harvey and I had a lovely weekend of sports, attending the NWSL championship in San Jose on Saturday and watching the Stanford women's hoops team defeat Lehigh yesterday.
- We had some bonus fun yesterday as the Cardinal team hung around for selfies after their big win.
Thanks to Megan Morrone for editing this newsletter and Matt Piper for copy editing.