Teens flock to companion bots despite risks
Nearly three-quarters of U.S. teens (72%) say they use AI for companionship, with more than half of those doing so at least a few times a month, a new survey released Wednesday finds.
Why it matters: AI companions can be dangerous to young users, posing an "unacceptable risk," according to Common Sense Media, which published the findings.
What they did: For the purposes of this research, conducted in April and May 2025, "AI companions" were defined as "digital friends or characters you can text or talk with whenever you want."
- These could include apps designed to be AI companions, like Character.AI, Nomi and Replika.
- It also includes tools like ChatGPT and Anthropic's Claude, which weren't built as companions but are still being used that way by teens.
- The nationally representative survey included 1,060 respondents aged 13-17.
- OpenAI says users must be at least 13 years old. Anthropic's terms of use require users to be 18 years or older.
Stunning stat: 34% of teens who use AI companions report that they've felt uncomfortable with something the bot has "said or done."
- Although 66% of teens said they had never felt uncomfortable chatting with a bot, Common Sense Media says "the absence of reported discomfort does not necessarily indicate safe interactions."
- "Teens may not recognize age-inappropriate content as problematic, may normalize concerning conversations, or may be reluctant to report uncomfortable experiences."
Yes, but: Most teens still prefer people to bots.
- Eighty percent of teens who use AI companions say they spend more time with real friends than with their bots.
- Two-thirds (67%) still find AI conversations less satisfying than human conversations.
- Half of teens (50%) say they don't trust the information or advice provided by AI companions.
Some teens said that they've applied social skills that they practiced with their AI companions to real-life situations. They said the tools taught them how to start conversations, resolve conflicts and express emotions.
- But some (25%) also report sharing their real name, location and personal secrets with their AI companions.
- Nearly a quarter of teens (23%) say they trust AI companions "quite a bit" or "completely" — despite chatbots' well-documented tendency to make things up.
Between the lines: Some AI companion platforms have already been linked to troubling and dangerous teen interactions.
- Last year a Florida mom sued Character.AI, alleging that her 14-year-old son developed an emotionally and sexually abusive relationship with a chatbot that led him to take his own life.
- Parents in Texas have also sued Character.AI, alleging its chatbot encouraged a teen to kill his parents over restrictive screen-time limits.
- Character.AI has launched tools to help parents navigate their teens' use of its platform.
Follow the money: Companion apps are big business because they're unusually effective at grabbing and holding users' attention.
- The average number of user sessions per month for companion apps is more than 10 times that of general assistant apps, content generation apps and even messaging apps, according to Andreessen Horowitz's data.
- Elon Musk's xAI launched AI companions on Monday. Grok users can now chat with an animated fox and a goth anime girl in thigh-high fishnet stockings.
- A new app called Tolan (for ages 13 and up), which matches users with an animated alien companion bot, just raised $20 million in new funding. The app's trailer appears aimed squarely at teen users.
The bottom line: Common Sense recommends stronger age verification, better content moderation, expanded AI literacy programs in schools and more research into how these tools shape teen development.
Editor's note: This story has been corrected to show that the study found that more than half of U.S. teens who use AI for companionship do so at least a few times a month (not every day).
