Axios AI+

December 12, 2024
"Polarization" is Merriam-Webster's word of the year.
Today's AI+ is 1,252 words, a 5-minute read.
1 big thing: Chatbot apps pose dangers for teens
Platforms and apps that allow users to create and chat with AI-powered bots can addict teenagers, encourage self-harm and expose minors to adult content, according to experts.
Why it matters: Looser regulation of AI in the wake of the 2024 election could give freer rein to makers of problematic AI companion apps.
Driving the news: Parents in Texas on Monday filed a federal product liability lawsuit against companion app Character.AI and its founders, who have left the company.
- The lawsuit includes screenshots of a message from a "character" encouraging a teen to kill his parents over restrictive screen time limits.
- In October a Florida mom also sued Character.AI, blaming the company for her 14-year-old son's suicide.
- Character.AI spokesperson Chelsea Harrison says the company doesn't comment on pending litigation, but issued a statement saying that Character.AI aims "to provide a space that is both engaging and safe for our community."
Character.AI has recently added new safety features (see below), but this sort of app remains highly addictive, especially for teens, Common Sense Media says in its guide for parents.
- Character.AI is designed for users 13 years old and over in the U.S. and 16 years and older in Europe. Age is self-reported, and there is no age verification, which is notoriously difficult online.
Catch up quick: Chatbot companions — also called AI girlfriends or boyfriends, personalized AI, social bots, or virtual friends — have been heralded as a cure for loneliness.
- But critics say they may intensify feelings of isolation and could be especially dangerous for teenagers who already struggle with behavioral challenges.
- Even ChatGPT creator OpenAI warns against emotional reliance on chatbots. In August, the company published a safety report on its newest model, GPT-4o, explaining that users might form social relationships with AI, "possibly affecting healthy relationships."
- During testing, OpenAI "observed users using language that might indicate forming connections with the model," the report says.
How it works: Character.AI and other chatbot companion platforms allow users to create "characters" in order to chat or role-play. A Character.AI spokesperson tells Axios that users create hundreds of thousands of new characters on the platform every day.
- "The level of engagement that people have with these things is truly, truly incredible. Many, many hours a day," says Lucas Hansen, co-founder of the nonprofit CivAI.
- Hansen says the potential for companies to employ algorithms to keep a user's attention is "so much larger" than other social media because chatbot companions "get to optimize entire personalities."
- "As with any platform, I'm sure there are some users who use it more and some who use it less, but the average is certainly not hours and hours a day," Dominic Perella, Character.AI's interim CEO, tells Axios.
Zoom in: The platforms, which are extremely popular with teens, often send emails intended to re-engage users, and their bots will not typically break character even when a user is in distress.
Between the lines: Many online safety experts are careful not to make value judgments about how teenagers spend their time.
- Child safety advocates have spent years claiming that music, video games and social media are inherently bad for teens, with few longitudinal studies to back up this claim.
- The key, Hansen says, is to look for power imbalances: "I think there is a pretty immense power imbalance in this case. Essentially, what you have is a whole company of really, really smart people trying to figure out how to maximize engagement, versus the mind of one person."
The other side: Over the past six months, a Character.AI spokesperson tells Axios, the company has continued investing in trust and safety, adding leadership roles dedicated to moderation and hiring more safety engineers to support that work.
- Character.AI also says its "characters" are fictional personas designed for entertainment rather than companionship.
The bottom line: While some big companies are focused on making generative AI safer for teens — like Google's Gemini for Teens — experts say parents and caregivers need to have conversations with their teens about these apps.
2. Character.AI releases new safety features
Character.AI is releasing updated safety features days after parents filed a new lawsuit against the company and its founders, who now work at Google.
The big picture: The lawsuit claims that Character.AI "poses a clear and present danger to public health and safety" and calls for it to be taken offline and for its developers to be held responsible for releasing an unsafe product.
Between the lines: The conversations that users have with Character.AI "characters" are powered by a proprietary large language model. In the past month, the company says, it has developed a new model specifically for teen users.
- "The goal is to guide the model away from certain responses or interactions, reducing the likelihood of users encountering, or prompting the model to return, sensitive or suggestive content," according to a company blog post.
- Character.AI says the changes will result in a different experience for teen users than what's available to adult users.
- "In certain cases where we detect that the content contains language referencing suicide or self-harm, we will also surface a specific pop-up directing users to the National Suicide Prevention Lifeline," per the blog post.
Friction point: The challenge in making these models safer is that they're designed to create fictional worlds.
- Interim CEO Dominic Perella tells Axios that Character.AI is in a "new space," meaning the consumer entertainment side of genAI, as opposed to the utility side.
- "You want your models in this part of the world to be fun to talk to," he says.
- Perella — who was the company's general counsel before the previous CEO and the president left to return to Google — tells Axios that the company wants to make the platform both "engaging and safe."
Reality check: Social media content moderation, especially when it comes to teens, means navigating an ever-changing moral minefield where malicious intent is difficult to separate from parody and satire.
- Adding the unpredictable nature of AI bots to the equation could make moderating that much trickier.
Character.AI's trust and safety head, Jerry Ruoti, says the company is working on new parental controls for the app.
- But right now there is no clear way for parents to know that their teens are using the app unless the teens disclose it or parents notice the apps their children download.
The parents of the three teens in the two lawsuits against the company all said they did not know that their children were using Character.AI.
What we're watching: The company says it's working with teen safety experts and adding new reminders that chatbots are not real.
- It's also improving "time spent" notifications so that, "eventually," teens won't be able to click a box to dismiss the wellness reminders that appear after an hour-long session on the platform.
If you or someone you know needs support now, call or text 988 or chat with someone at 988lifeline.org.
3. Training data
- Meta has donated $1 million to President-elect Trump's inaugural fund. (Axios)
- Time built a chatbot that answers questions about its Person of the Year announcement and plans to expand its use in the future. (Axios)
- Military history buffs are waging war against AI data centers that threaten battlefields and other historical sites. (Axios)
- It looks like OpenAI's Sora was trained on video game content — a move that could add to the company's list of legal issues. (TechCrunch)
4. + This
GCHQ, a British spy agency, has released its annual puzzle-filled Christmas card designed to both offer holiday cheer and spot future codebreakers.
Thanks to Megan Morrone and Scott Rosenberg for editing this newsletter and Anjelica Tan for copy editing it.