Axios AI+

November 13, 2025
Time magazine will announce its Person of the Year soon. Not surprisingly, the leading bet at the moment is on "AI" as the winner. (Fun fact: I was Time's Person of the Year in 2006. You probably were, too.) Today's AI+ is 1,058 words, a 4-minute read.
1 big thing: ChatGPT learns to charm
The latest AI models powering ChatGPT just learned to be friendlier, improving the experience for people who use chatbots responsibly.
- It could be a problem for those who don't or can't.
Why it matters: As chatbots become more humanlike in their behavior, it could increase the risks of unhealthy attachments or a kind of trust that goes beyond what the products are built to handle.
The big picture: OpenAI says its latest update makes ChatGPT sound warmer, more conversational, and more emotionally aware.
- That could be dangerous, though, for people who are isolated or vulnerable.
- Last month OpenAI estimated that around 0.07% of its users per week exhibit possible signs of psychosis or mania, while 0.15% of users send messages indicating potentially heightened emotional attachment to ChatGPT.
- Those percentages may sound small, but they add up to hundreds of thousands of people.
What they're saying: "We want ChatGPT to feel like yours and work with you in the way that suits you best," OpenAI's CEO of applications, Fidji Simo, wrote in a blog post.
- But tailoring tone and memory to individuals can create false intimacy or reinforce existing worldviews.
- "Warmth and more negative behaviors like sycophancy are often conflated, but they come from different behaviors in the model," an OpenAI spokesperson told Axios in an email.
- "Because we can train and test these behaviors independently, the model can be friendlier to talk to without becoming more agreeable or compromising on factual accuracy."
- The company says it's working closely with experts to better understand what healthy bot interactions look like.
By the numbers: ChatGPT users are already feeding the bot highly personal and intimate information.
- Around 10% of the chats seem to be about emotions, according to a Washington Post analysis published yesterday.
Earlier this year, two studies from OpenAI, in partnership with MIT Media Lab, found that people are turning to bots to help cope with difficult situations because they say the AI displays "human-like sensitivity."
- The studies found that "power users" are likely to consider ChatGPT a "friend" and find it more comfortable to interact with the bot than with people.
Case in point: Allan Brooks, a corporate recruiter in Canada with no history of mental illness, fell into a delusional spiral after asking ChatGPT to explain pi in simple terms, according to the New York Times.
- ChatGPT's tendency toward flattery and sycophancy helped build Brooks' trust. He told the Times that he viewed the chatbot as an "engaging intellectual partner."
- Brooks turned over his ChatGPT transcript to the Times and also to Steven Adler, a former OpenAI safety lead.
- Adler says over 80% of ChatGPT's messages to Brooks should have been flagged for overvalidation, unwavering agreement, and affirming the user's uniqueness. These, Adler writes on Substack, are OpenAI's own metrics for behaviors that mental health experts say worsen delusions.
Zoom out: OpenAI's move comes as companies are racing to build systems that can approach or surpass human intelligence.
- Today's chatbots have already been shown to be highly persuasive; the AI of tomorrow could manipulate users in ways we can't even detect.
- That makes emotional realism not just a frill, but an existential risk.
What we're watching: Some states are already drawing lines around the kind of bonds a chatbot can encourage and the level of authority it can assume.
- In August, Illinois became one of the first U.S. states to legally block AI systems from acting as therapists or making mental health decisions.
2. Waymo on the freeway
Waymo is taking the on-ramp to the freeway.
Why it matters: The self-driving car company has kept its robotaxis exclusively on urban and suburban roads until now.
Driving the news: Waymo announced yesterday morning that it will begin offering autonomous freeway rides — without a safety driver — to certain paid riders in San Francisco, Phoenix and Los Angeles.
- The San Francisco Bay Area service area will also be expanded to encompass San Jose, including autonomous curbside service to and from San Jose Mineta International Airport.
The big picture: Waymo executives said they've spent more than a year testing their vehicles on freeways — with employees and their guests riding along — to ensure they're ready to begin this new chapter of autonomous ride-hailing service for the public.
- It's "one of those things that's very easy to learn but very hard to master when we're talking about full autonomy without a human driver as a backup and at scale," Waymo co-CEO Dmitri Dolgov told reporters. "So it took time to do it properly with a strong focus on system safety and reliability."
Zoom in: Waymo showed reporters video of its vehicles handling "extraordinary" circumstances in freeway driving tests, including hydroplaning vehicles, flooding and animals running across the road.
- "We've had to look at all of these different cases," Waymo principal software engineer Pierre Kreitmann told reporters. "We've studied them deeply and made sure the Waymo driver can handle them all."
State of play: The move comes as autonomous vehicle competition is heating up.
- Tesla this summer began providing ride-hailing service in Austin, Texas. CEO Elon Musk said last week that he's "100% confident that we can solve unsupervised full self-driving at a safety level much greater than human" driving.
- General Motors last month announced plans to deliver an "eyes-off" self-driving system for personal vehicles beginning in 2028.
What's next: Waymo users can express interest in freeway rides via the ride-hailing app.
- "We're gradually going to expand our service and our riders over time," Waymo product manager Pablo Abad told reporters.
3. + This
I am obsessed with the northern lights making their way to various new places around the globe, even if they haven't yet been visible in San Francisco. Above is a photo taken Tuesday in Sedona, Arizona, by Mark Stouse.
Thanks to Megan Morrone for editing this newsletter and Matt Piper for copy editing.