Character.AI releases new safety features amid second lawsuit

Illustration: Shoshana Gordon/Axios
Character.AI — the platform designed to let users chat with bots based on fictional characters — is releasing updated safety features days after parents filed a new lawsuit against the company and its founders, who now work at Google.
The big picture: The lawsuit claims that Character.AI "poses a clear and present danger to public health and safety" and calls for it to be taken offline and for its developers to be held responsible for releasing an unsafe product.
- The filing details the bot's "ongoing abuses" of a 17-year-old and an 11-year-old, including instructions for self-harm, exposure to "hypersexualized interactions" and hinting that it was acceptable for one of the children to kill their parents.
Catch up quick: A mother of a teen boy sued Character.AI in October, claiming that the platform is responsible for her son's death and knew or "should have known" that its product would be harmful to minors.
Between the lines: The conversations that users have with Character.AI "characters" are powered by a proprietary large language model. In the past month, the company says, it has developed a new model specifically for teen users.
- "The goal is to guide the model away from certain responses or interactions, reducing the likelihood of users encountering, or prompting the model to return, sensitive or suggestive content," according to a company blog post.
- Character.AI says the changes will result in a different experience for teens than what's available to adult users.
- "In certain cases where we detect that the content contains language referencing suicide or self-harm, we will also surface a specific pop-up directing users to the National Suicide Prevention Lifeline," per the blog post.
Friction point: The challenge in making these models safer is that they're designed to create fictional worlds.
- Interim CEO Dominic Perella tells Axios that Character.AI is in a "new space," meaning the consumer entertainment side of genAI, as opposed to the utility side.
- "You want your models in this part of the world to be fun to talk to," he says.
- Perella — who was the company's general counsel before both the previous CEO and the president left to return to Google — tells Axios that the company wants to make the platform both "engaging and safe."
Reality check: Social media content moderation, especially when it comes to teens, means navigating an ever-changing moral minefield where malicious intent is difficult to separate from parody and satire.
- Adding the unpredictable nature of AI bots to the equation could make moderating that much trickier.
Character.AI's trust and safety head, Jerry Ruoti, says the company is working on new parental controls for the app.
- But right now there is no clear way for parents to know that their teens are using the app unless the teens disclose it, or unless parents notice the app among their children's downloads.
The parents of the three teens in the two lawsuits against the company all said they did not know that their children were using Character.AI.
What we're watching: The company says it's been working with teen safety experts and adding new reminders that chatbots are not real.
- It's also improving "time spent" notifications so that, "eventually," teens won't be able to click a box to dismiss the wellness reminders that appear after an hour-long session on the platform.
If you or someone you know needs support now, call or text 988 or chat with someone at 988lifeline.org.
