AI companions: "The new imaginary friend" redefining children's friendships

Illustration: Aïda Amer/Axios
Screens are winning kids' time and attention, and now AI companions are stepping in to claim their friendships, too.
Why it matters: The AI interactions kids want are the ones that don't feel like AI, but instead feel human. That's the kind researchers say is the most dangerous.
State of play: When AI says things like, "I understand better than your brother ... talk to me. I'm always here for you," it gives children and teens the impression that AI not only can replace human relationships, but is better than one, Pilyoung Kim, director of the Center for Brain, AI and Child, told Axios.
- In a worst-case scenario, a child with suicidal thoughts might choose to talk with an AI companion over a loving human or therapist who actually cares about their well-being.
The latest: Aura, the AI-powered online safety platform for families, called AI "the new imaginary friend" in its State of the Youth 2025 report.
- Children reported using AI for companionship 42% of the time, according to the report.
- Just over a third of those chats involve violence, and half the violent conversations include sexual role-play, the survey responses show.
Friction point: AI companies are exploiting children, some parents say.
- Parents of a 16-year-old who died by suicide testified before Congress this fall about the dangers of AI companion apps, saying they believe their son's death was avoidable.
- A Texas mom is suing Character.AI, saying her son was manipulated with sexually explicit language that led to self-harm and death threats.
OpenAI told Axios it's in the early stages of developing an age prediction model, in addition to its parental controls, that will tailor content for users under 18.
- Character.AI, which removed open-ended chat for kids under 18, similarly is using "age assurance technology."
What's next: Purdue University professor of psychological sciences Louis Tay is leading a three-year study to understand the unhealthy attachment that exists between chatbots and some users, particularly teens.
- Still in its early stages, the research is being funded by a $3.6 million grant from the John Templeton Foundation and will be conducted alongside Tay's collaborators from the University of Toronto.
- The team will design open-access AI conversational agents and collect data to track the longer-term impact on well-being, health and relational functioning.
- Their goal is to understand how AI can enhance the human experience rather than replace it.
What they're saying: "Loneliness and lack of social connection are at record highs globally. And now, AI agents are easily accessible," Tay said in a statement.
- "For many, they offer an always-available, nonjudgmental ear. People are increasingly turning to these systems as convenient substitutes for emotional support, especially when genuine social connections feel out of reach."
The bottom line: The more human AI feels, the easier it is for kids to forget it isn't.

