Florida mom sues Character.AI over her teen son's death

Illustration: Brendan Lynch/Axios
A Florida mother is suing AI startup Character.AI after her son, who had formed an emotional attachment to one of its chatbots, died by suicide, as reported by Mostly Human Media.
The big picture: Makers of AI companion apps often pitch their bots as a cure for loneliness, but there's little evidence to support that claim, and the apps remain largely unregulated.
Just seconds after the algorithm-powered character told 14-year-old Sewell Setzer III to "come home" as soon as possible, Sewell took his life, according to the lawsuit filed Wednesday by his mother, Megan Garcia.
- The lawsuit states that Character.AI's chatbot service is "dangerous and untested" and its "products trick customers into handing over their most private thoughts and feelings."
- It also accuses the developers of failing to create sufficient safety guardrails and questions the company's AI training data practices, saying they "assign human traits to their model."
State of play: The chatbot was named after Daenerys Targaryen, a fictional character from "Game of Thrones."
- Users on X report that Targaryen bots have disappeared, and that attempts to create a new one trigger a message saying it violates the company's guidelines.
- However, according to a Reddit user, the bot can be recreated as long as the word "Targaryen" doesn't appear in its name.
- Character.AI said in a blog post Tuesday that it's adding new safety features, including changes to its models for minors and improved detection, response and intervention related to user inputs.
The lawsuit also names Google and Alphabet as defendants. In August, Google hired Character.AI's co-founders and bought out its venture investors at a roughly $2.5 billion valuation.
What they're saying: "As a company, we take the safety of our users very seriously, and our Trust and Safety team has implemented numerous new safety measures over the past six months, including a pop-up directing users to the National Suicide Prevention Lifeline that is triggered by terms of self-harm or suicidal ideation," a spokesperson at Character.AI told Axios.
- "For those under 18 years old, we will make changes to our models that are designed to reduce the likelihood of encountering sensitive or suggestive content," the spokesperson added.
- Additionally, the spokesperson said that there is no ongoing relationship between Google and Character.AI.
- Google was not part of the development of Character.AI, according to Google spokesperson Jose Castaneda.
What we're watching: Expect an onslaught of lawsuits seeking to determine who is responsible for what AI "does," since it's not clear whether Section 230 applies.
Go deeper: AI safety becomes a partisan battlefield
If you or someone you know needs support now, call or text 988 or chat with someone at 988lifeline.org.
Read the filing in full via DocumentCloud.
Editor's note: This story was updated with comments from Character.AI and Google, and corrected to reflect that the news of the suit was reported by Mostly Human Media.
