Axios AI+

September 02, 2025
I hope you had a great Labor Day weekend. Ours was full of sports, including two Valkyries wins, a trip to the batting cages and some fielding practice, too. Today's AI+ is 1,259 words, a 5-minute read.
1 big thing: Stop pretending AI is human
Some industry leaders and observers have a new idea for limiting mental health tragedies stemming from AI chatbot use: They want AI makers to stop personifying their products.
Why it matters: If chatbots didn't pose as your friend, companion or therapist — or, indeed, as any kind of person at all — users might be less likely to develop unhealthy obsessions with them or to place undue trust in their unreliable answers.
The big picture: AI is in its "anything goes" era, and government regulations are unlikely to rein in the technology anytime soon. But as teen suicides and instances of "AI psychosis" gain attention, AI firms have a growing incentive to solve their mental health crisis themselves.
Yes, but: Many AI companies have set a goal of developing artificial "superintelligence."
- They often define that to mean an AI that can "pass" as a real (and very smart) human being. That makes human impersonation not just a frill but a key product spec.
- AI makers also understand that it's precisely the ability of large-language-model-driven AI to role-play human personalities that makes chatbots so beguiling to so many users.
What they're saying: In a blog post last month, Mustafa Suleyman — co-founder of DeepMind and now CEO of Microsoft AI — argues that "we must build AI for people; not to be a digital person."
- AI can't be "conscious," Suleyman writes, but it can be "seemingly conscious" — and its ability to fool people can be dangerous.
In a post on Bluesky addressing a report about a teen suicide that prompted a lawsuit against OpenAI, web pioneer and software industry veteran Dave Winer wrote, "AI companies should change the way their product works in a fundamental way."
- "It should engage like a computer not a human — they don't have minds, can't think. They should work and sound like a computer. Prevent tragedy like this."
Between the lines: Most of today's popular chatbots "speak" in the first person and address human users in a friendly way, sometimes even by name. Many also create fictional personas.
- These behaviors aren't inevitable features of large-language-model technology, but rather specific design choices.
- For decades Google search has answered user queries without pretending to be a person — and even today the search giant's AI-driven overviews don't adopt a chatbot's first-person voice.
Friction point: Suleyman and other critics of anthropomorphic AI warn that people who come to believe chatbots are conscious will inevitably want to endow them with rights.
- From the illusion of consciousness it's one short hop to viewing an AI chatbot as having the ability to suffer or the "right not to be switched off." "There will come a time," Suleyman writes, "when those people will argue that [AI] deserves protection under law as a pressing moral matter."
- Indeed, OpenAI CEO Sam Altman is already suggesting what he calls "AI privilege" — meaning conversations with chatbots would share the same protections as those with trusted professionals like doctors, lawyers and clergy.
The other side: The fantasy that chatbot conversations involve communication with another being is extraordinarily powerful, and many people are deeply attached to it.
- When OpenAI's recent rollout of its new GPT-5 model made ChatGPT's dialogue feel just a little more impersonal to users, the outcry was intense — one of several reasons the company backtracked, keeping its predecessor available for paying customers who craved a more unctuous tone.
In a different vein, the scholar Leif Weatherby — author of "Language Machines" — has argued that users may not be as naive as critics fear.
- "Humans love to play games with language, not just use it to test intelligence," Weatherby wrote in the New York Times. "What is really driving the hype and widespread use of large language models like ChatGPT is that they are fun. A.I. is a form of entertainment."
Flashback: The lure and threat of anthropomorphic chatbots have been baked into their history from the start.
- In the 1960s, MIT's Joseph Weizenbaum designed Eliza, the first chatbot, as a mock "therapist" that simply mirrored whatever users said.
- The simulation was crude, but people immediately started confiding in Eliza as if "she" were human — alarming and disheartening Weizenbaum, who spent the rest of his career warning of AI's potential to dehumanize us.
2. OpenAI to add more safeguards to ChatGPT
ChatGPT guardrails for teens and people in emotional distress will roll out by the end of the year, OpenAI promised today.
Why it matters: Stories about ChatGPT encouraging suicide or murder or failing to appropriately intervene have been accumulating recently, and people close to those harmed are blaming or suing OpenAI.
- ChatGPT currently directs users expressing suicidal intent to crisis hotlines. OpenAI says it does not currently refer self-harm cases to law enforcement, citing privacy concerns.
Between the lines: Work to improve how its models recognize and respond to signs of mental and emotional distress was already underway, OpenAI said in a blog post today.
- The post outlines how the company has been making it easier for users to reach emergency services and get expert help, strengthening protections for teens and letting people add trusted contacts to the service.
Driving the news: OpenAI's post previews its plans for the next 120 days, and says the company is making "a focused effort" to launch as many of these improvements as possible this year.
How it works: "We're beginning to route some sensitive conversations, such as when signs of acute distress are detected, to reasoning models like GPT-5-thinking," OpenAI says.
- GPT-5's thinking model applies safety guidelines more consistently, per the company.
- A network of over 90 physicians across 30 countries will give input on mental health contexts and help evaluate the models, OpenAI says.
Zoom in: ChatGPT users must be 13 and up, with parental permission required for users under 18. Within the month, parents will be able to link their accounts with their teens' accounts for more direct control.
- Once accounts are linked, the parent can manage how ChatGPT responds and "receive notifications when the system detects their teen is in a moment of acute distress."
- "These steps are only the beginning," OpenAI wrote in the blog post.
- Character.AI, which has also been blamed for more than one teenager's suicide, introduced similar parental controls in March.
Reality check: Keeping savvy kids from accessing sites and apps they're not old enough to use is a thorny problem. Convincing those ages 13-18 to link their accounts to their parents' could be an even tougher sell.
If you or someone you know needs support now, call or text 988 or chat with someone at 988lifeline.org (help is also available in Spanish).
3. Training data
- China's AI industry is less focused on superintelligence than on delivering useful consumer applications. Its government wants to limit excess competition to avoid a wasteful bubble. And it's rolling out a new regulation requiring labeling of AI-created content. (Wall Street Journal, Bloomberg, South China Morning Post)
- Communications teams are using AI to monitor the internet for brand sentiment, respond to media requests and automate other annoying tasks. (Axios)
- Whitney Wolfe Herd — co-founder of Tinder and founder of Bumble — is developing an AI matchmaker. (WSJ)
4. + This
The first couple games of the college football season can be tough, but it was a particularly rough start for the Oregon Ducks' mascot, who lost his head running onto the field.
Thanks to Scott Rosenberg and Megan Morrone for editing this newsletter and Matt Piper for copy editing.