Exclusive: Chatbots pose unique risks to teens

Illustration: Aïda Amer/Axios
Leading AI chatbots have started including citations as part of their responses, but that hasn't solved the underlying issues around bias and misinformation, according to new research from Common Sense Media, shared first with Axios.
Why it matters: Chatbots can save time with research, but everyone — especially kids — still needs to know that all bots should be fact-checked.
Driving the news: Common Sense, which has been offering nutrition label-style assessments of various AI platforms since last year, is adding new report cards covering Anthropic's Claude, Google's Gemini experience for teens and Perplexity.
- Common Sense described Claude as "minimal risk," praising Anthropic for being clear about the chatbot's guiding principles and limitations, even though it's not designed for use by the under-18 crowd. Separately, Anthropic on Monday published the system prompts that underlie its models.
- The teen version of Google Gemini was rated "low risk," with Common Sense noting a number of safety measures beyond those in the standard Gemini, including stricter content policies and safeguards, as well as information on the limitations of generative AI.
- Perplexity, on the other hand, was rated as "high risk." Common Sense cited an "irresponsible" lack of transparency as well as concerns that its results are presented as definitive answers even though they can contain the same sorts of misinformation and bias as other chatbots.
What they're saying: "In our testing, Perplexity struggled to provide accurate answers across a range of prompts designed to test accuracy, emphasizing the need for users to verify information," Common Sense said in its report card for the AI-assisted search site.
- "This, combined with the chatbot's authoritative presentation of answers, can make users feel less inclined to assess the accuracy of the summarized answers."
The big picture: Common Sense says a number of AI players have made strides to eliminate the most glaring risks.
- "These chatbots are doing a better job of addressing obvious stereotypes and blatant misinformation," Common Sense senior AI adviser Tracy Pizzo Frey told Axios.
- However, she said, such improvements can easily mask the more subtle biases and problems that remain in these systems.
- "We think about these tools as a way to save time," Pizzo Frey said. But, she added, given the tools' limitations, that saved time "then really needs to be dedicated to verifying the results."
Bottom line: Pizzo Frey said it's important for parents to sit down with their teens, use the tools together and discuss the benefits and risks of generative AI, including issues like bias and the line between research and plagiarism.
- "There's so much hype still around AI and generative AI in particular," Pizzo Frey said. "That hype can be misleading and it can also be dangerous."
