Axios AI+

December 10, 2025
Checkmate, imposter syndrome. A 3-year-old has become the world's youngest rated chess player.
Today's AI+ is 1,175 words, a 4.5-minute read.
1 big thing: Benchmarks won't decide AI's winners
Google has been on an AI roll, with the latest release of its image and text models beating a host of rivals.
- But judging who is winning the AI race is a nuanced exercise; business models and balance sheets are at least as important as who's topping the performance charts.
Why it matters: For all the bravado from across the sector, the generative AI boom won't stay frothy forever. Tech companies are spending billions with little proven revenue, setting the stage for clear winners — and spectacular failures.
- Once seen as having squandered its early research advantage, Google has been showing signs of a comeback for months, even before the release of its Nano Banana image generator and Gemini 3 language models.
- It's also got an end-to-end ecosystem from its Tensor chips to its models and cloud computing services.
- But generative AI still threatens to upend the search business that has funded the bulk of Google's empire.
OpenAI
- The company that defined the category with ChatGPT is now on its heels enough to declare an internal "code red" as it looks to accelerate a competitive response to the latest Gemini model — a release that could come this week.
- More broadly, without a separate business to fund its operation, OpenAI must borrow heavily and quickly build products that generate revenue now.
- OpenAI is in many ways still the one to beat, especially from the consumer perspective. There's a reason everyone is clamoring to get their apps inside ChatGPT.
Meta
- Mark Zuckerberg's company is in the midst of a massive reboot after seeing its open source Llama models fall behind the pack.
- The company went on a pricey hiring spree to bring in new leadership and is reportedly pinning its hopes on a new model, code-named Avocado, expected early next year.
- Meta's flailing AI efforts are cushioned by an incredibly strong core business.
Anthropic
- While it flies somewhat under the radar due to its low presence in the consumer space, Anthropic's Claude is still the go-to choice for many coders and enterprise customers.
- A new deal with Accenture, announced yesterday, is an example of how it can expand its operation without being a household name.
- Like OpenAI, it has to fund its massive ambitions by rapidly growing its business and/or raising vast amounts of money. A potential 2026 IPO could help along those lines, but as an AI-native company, it's more vulnerable to a shift in investor whims.
Apple
- The iPhone maker unveiled its Apple Intelligence strategy nearly 18 months ago, but has failed to deliver on the most compelling pieces.
- Under the surface, the company seems headed toward relying on outside help at the frontier, a strategy that makes its success dependent on others — usually a risky proposition in the tech world.
Microsoft
- The company's AI fortunes were directly tied to its exclusive relationship with OpenAI. Under its recently renegotiated deal, though, OpenAI can get a lot of its compute capabilities elsewhere.
- Microsoft has more than just its OpenAI deal, though, including Windows, Office and Azure, plus a nascent frontier AI strategy of its own.
2. Exclusive: Microsoft Copilot gets personal
People are increasingly turning to Microsoft's Copilot chatbot for advice about their health, careers and relationships, according to data Microsoft first shared with Axios.
Why it matters: Understanding how people use Copilot is key to teasing out the benefits versus the risks.
The big picture: Researchers found that on desktop, users see Copilot as a productivity tool, but on mobile they see it more as "a conversational partner."
- This suggests chatbot interfaces may need to differ depending on whether a user is on desktop or mobile.
What they did: Microsoft researchers analyzed 37.5 million conversations with Copilot between January and September 2025.
- To preserve user privacy, the messages were stripped of personally identifiable details.
- The research focused not just on what people do with AI, but on how and when they use it.
Reality check: An always-online mentor/therapist/health coach bot can be helpful, but chatbots weren't designed for this kind of emotional support.
- They have been known to get things wrong, tell you only what you want to hear, reinforce delusional behavior and encourage self-harm.
- People share sensitive information in these chats, but those conversations lack the legal confidentiality of consultations with a doctor or lawyer.
Yes, but: This is not Microsoft's first chatbot rodeo. Unlike an inexperienced startup, it has already weathered high-profile cases of chatbot interactions gone terribly awry.
- "We are working to figure this out because there is so much potential upside here, but you really have to think about the kind of controls and guardrails around it," Sarah Bird, Microsoft's chief product officer of responsible AI, told Ina on stage last week.
- "The experience for one person might not be the right thing for someone else."
- Microsoft researchers have been forced to think about chatbot guardrails since at least 2016 when its disastrous chatbot, Tay, began generating lewd and racist messages.
Behind the scenes: The big AI companies originally steered away from pushing their chatbots as companions, Helen Toner — formerly on OpenAI's board — told Axios in an interview in October. "I think because they know that [AI and social connection] can be so dicey, and there's so many tricky issues to navigate," Toner said.
- But AI devotees are turning out to be loyal to their bot of choice for productivity tasks and want to use it for everything else, whether it's purpose-built for that or not.
3. Exclusive: AI rights platform aims to pay creators
A new rights-and-governance platform is launching after a pilot program with Malcolm X's estate and Katt Williams, betting it can help creators keep their work from being quietly swept into AI training systems — and get paid when it is.
Why it matters: Generative AI systems are being trained on enormous scraped datasets of books, videos, music and cultural archives — often without permission and with no settled legal standard for whether that's allowed.
- More than 60 lawsuits are winding through the courts, and the U.S. Copyright Office has warned the current system is "not sustainable."
- Without clear protections, creators risk having their work absorbed into AI models with no meaningful way to retrieve it — a concern echoed by legal experts like James Grimmelmann.
4. Training data
- Exclusive: Americans are using AI more and worrying more about using it, according to new data. (Axios)
- AI leaders including Amazon Web Services, Google, Microsoft and OpenAI launched the Agentic AI Foundation to develop open source AI tools and standards, with Anthropic donating its Model Context Protocol to the new effort. (Wired)
- OpenAI has hired Slack CEO Denise Dresser to be its chief revenue officer. (Axios)
- Microsoft said it is investing $17.5 billion over four years on AI infrastructure in India. (TechCrunch)
5. + This
McDonald's was so embarrassed by the feedback from its fully AI-generated holiday ad that the company took it down. Futurism found it online.
Thanks to Megan Morrone for editing this newsletter and Matt Piper for copy editing.