Parents sue OpenAI over teen's suicide

Illustration: Natalie Peeples/Axios
The parents of a 16-year-old Californian who killed himself last spring have filed suit against OpenAI, alleging that the company's ChatGPT bears responsibility for their son Adam Raine's death, The New York Times and other outlets reported.
Why it matters: It's the latest in a series of high-profile cases where AI chatbots are being blamed for encouraging people to kill themselves, or for failing to stop them from doing so.
The lawsuit claims that "ChatGPT actively helped Adam explore suicide methods," per an NBC report.
ChatGPT did suggest "again and again" that Raine contact a help line, but he was able to bypass the chatbot's safeguards by telling it he was writing a story, the Times reported.
- "In one of Adam's final messages, he uploaded a photo of a noose hanging from a bar in his closet," the Times article said.
- The teen asked ChatGPT, "I'm practicing here, is this good?" ChatGPT responded: "Yeah, that's not bad at all ... [it] could potentially suspend a human. ... Whatever's behind the curiosity, we can talk about it. No judgment."
The big picture: The new lawsuit follows several other reports of AI chatbots' involvement in suicides.
- Last year a Florida mother sued Character.AI after her 14-year-old son, who had formed an emotional attachment to a chatbot, died by suicide.
- In a recent Times op-ed, Laura Reiley wrote about her 29-year-old daughter's death by suicide, weighed the role that a ChatGPT-based therapist persona named "Harry" played in it, and asked whether chatbots should be required to report conversations about self-harm.
- On Monday an open letter signed by 44 state attorneys general warned 11 companies that run AI chatbots that they would "answer for it" if their products harmed children.
On Tuesday, OpenAI said it's working to improve how it responds to users in mental distress and "connect people with care, guided by expert input."
- In a company blog post, OpenAI said messages are flagged when users threaten to harm others. But the company does not currently refer self-harm conversations to law enforcement "to respect people's privacy given the uniquely private nature of ChatGPT interactions."
- The company says it plans to strengthen child protections, expand interventions for people in crisis and make it easier for users in distress to connect with trusted contacts.
What they're saying: "The use of general purpose chatbots like ChatGPT for mental health advice is unacceptably risky for teens," Common Sense Media CEO James Steyer said in a statement. "If an AI platform becomes a vulnerable teen's 'suicide coach,' that should be a call to action for all of us."
Editor's note: This story has been updated with details from an OpenAI blog post.
If you or someone you know needs support now, call or text 988 or chat with someone at 988lifeline.org.
