Tech firms, states look to rein in AI chatbots' mental health advice

Illustration: Allie Carl/Axios
Concerns over Americans turning to AI chatbots to solve mental health problems are prompting new guardrails so people don't become too dependent on unvetted technology.
Why it matters: AI's booming popularity, the bots' reputation for delivering emotionally validating responses and a shortage of therapists are making more people turn to chatbot companions to talk through their problems.
The big picture: The bots aren't designed for those conversations, and can sometimes exacerbate mental health crises.
- A Florida teen died by suicide after developing relationships with chatbot characters on Character.AI, including one acting as a licensed therapist. His mother is suing the company.
- Northeastern University researchers found large language models can be harnessed to offer detailed instructions on how to commit suicide.
- Some users are also reportedly developing obsessions with AI chatbots, leading to severe mental health issues and a condition dubbed "ChatGPT psychosis."
Driving the news: ChatGPT will now prompt users to take a break during long sessions on the platform, OpenAI announced Monday. The company also clarified that ChatGPT should help users weigh their options when asked personal questions, rather than giving yes-or-no answers.
- On the other side, Illinois Gov. JB Pritzker last Friday signed an outright ban on the use of AI systems to provide direct mental health services in the state. Nevada, Utah and New York have also passed laws regulating AI and mental health.
- Under the new law, any app or chatbot must acknowledge that it cannot provide behavioral health services; violations carry fines of up to $10,000. Licensed clinicians can still use AI for administrative purposes, such as compiling notes from a therapy session with the patient's consent.
Zoom in: OpenAI said it worked with more than 90 physicians across the world to build rubrics for how the chatbot should respond when someone shows signs of mental distress.
- It's also convening an advisory group that includes mental health professionals, and collecting feedback from human-computer interaction researchers on possible safeguards and evaluation methods.
- The adjustments follow a decision earlier this year to roll back a ChatGPT update that made responses overly agreeable and sycophantic, which the company acknowledged could have adverse effects on users' mental health.
Mental health professionals have been sounding the alarm on unregulated AI therapy for months. The American Psychological Association in February urged the Federal Trade Commission to put safeguards in place so generic chatbots can't impersonate therapists.
Yes, but: More than one-third of the U.S. population lives in an area where there's a shortage of mental health professionals. Many providers are dropping out of insurance networks, making it harder for many people to find care at a reasonable cost.
- AI-powered therapy, if done correctly, could make counseling more accessible.
- An outright ban on using chatbots for mental health assistance like Illinois' "really squashes development in the space," said Nick Jacobson, an associate professor at Dartmouth who studies AI and behavioral care.
- But the current regulatory scheme also doesn't incentivize leading AI chatbot companies to make their products safe for mental health use cases on their own, Jacobson said.
- "I think it would require ... some new oversight institution to actually do this effectively," he said.
What we're watching: Investment firms are also continuing to bet on AI chatbots designed specifically to provide that kind of counseling. Slingshot AI, a chatbot therapy startup whose product does not constitute official mental health treatment, has raised $93 million so far.
- But these tools face an uphill battle if they do seek Food and Drug Administration approval. Woebot, one of the first such companies, recently shut down in part because of the expense and difficulty of meeting FDA marketing authorization standards, Stat reported.
Carrie Shepherd contributed reporting.
