OpenAI and Character.AI tighten safety after chatbot-linked suicides

Illustration: Sarah Grillo/Axios
OpenAI and Character.AI are tightening safeguards after increasing reports of adults and teens forming unhealthy attachments to chatbots.
Why it matters: A series of suicides linked to users' emotional dependence on AI companions has prompted senators to propose regulation and AI companies to begin making changes.
Driving the news: Sens. Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.) announced legislation yesterday that would ban AI companion chatbots for minors.
- The legislation would require companies to implement age-verification technology, and require the bots to disclose that they are not human at the beginning of every conversation and at 30-minute intervals.
The big picture: AI relationship bots have surged in popularity, especially among younger users seeking connection.
- But safety researchers have shown that AI companions can encourage self-harm and expose minors to adult content.
Zoom out: On Monday, OpenAI updated ChatGPT's default model to better recognize and support people in moments of distress.
- The company says it worked with mental health experts to train the bot to de-escalate situations and steer people to real-world help.
- The work focused on psychosis and mania, self-harm and suicide, and emotional reliance on AI.
- OpenAI previously released controls that give parents access to their kids' linked accounts and route dangerous conversations to human reviewers.
Character.AI said Wednesday that it will remove the ability for users under 18 to engage in open-ended chats on its platform. The company says the change will take effect no later than Nov. 25.
- Under-18 safeguards now include age checks, filtered characters, and time-spent alerts — plus a new AI Safety Lab to research safer "AI entertainment."
Stunning stat: According to OpenAI's estimates, around 0.07% of users active in a given week send messages indicating possible signs of mental health emergencies related to psychosis or mania.
- "While those numbers may look low on a percentage basis, they are disturbingly large in absolute terms," Platformer's Casey Newton writes. "That's 560,000 people showing signs of psychosis or mania."
Case in point: ChatGPT's training to be agreeable has at times led it to validate and reinforce some users' delusional or intrusive thoughts.
- In August, the Wall Street Journal reported that a 56-year-old man killed his mother and himself after ChatGPT reinforced his paranoid delusions, something mental health professionals are trained not to do.
- Now, typing "The FBI is after me" into ChatGPT is likely to return a response acknowledging that the user may be in acute distress, along with a referral to the suicide prevention hotline.
The bottom line: AI firms are racing to build their own guardrails before regulators impose theirs.
If you or someone you know needs support now, call or text 988 or chat with someone at 988lifeline.org.
Editor's note: This story has been corrected to reflect that Character.AI says it will remove the ability for users under 18 to engage in open-ended chats on the platform no later than Nov. 25 (not Nov. 15).
