Axios Future of Health Care

February 06, 2026
Good morning. Staying on the subject of the weather, I'm at the phase of winter where high 30s feels tropical. Thoughts and prayers.
- We're talking about something new (at least for me) this week — hit reply to let me know what you think!
Today's newsletter is 1,112 words, a 4-minute read.
1 big thing: The dangers of using AI for mental health
Millions of Americans are using AI chatbots as therapists, despite alarming evidence that it may be unsafe to do so.
Why it matters: America's mental health system isn't meeting the country's need for care. But while chatbots could be consequential treatment tools, they've also proven they can be downright dangerous.
Driving the news: The Johns Hopkins Bloomberg School of Public Health hosted a timely event this week on AI and mental health that I tuned into.
- It featured a panel of experts and the author of a NYT essay published last year titled "What My Daughter Told ChatGPT Before She Took Her Life."
- My takeaway was that this topic is much higher stakes and more complicated than I'd previously understood.
The big picture: The demand for mental health care in the U.S. is enormous. But there are documented harms associated with outsourcing that care to mass-consumer chatbots, whether or not users realize that's what they're doing.
- And at the same time, it seems the cat is completely out of the bag — we all have access to an affirming, empathetic AI voice that's free and available whenever it works best for us.
- While that's proven useful in some situations, society for now is playing catch-up.
Where it stands: "What is so interesting is when you have an always available, nonjudgmental bot, something that you can connect with and say whatever you want to say, a lot of people use it," Thomas Insel, a former director of the National Institute of Mental Health and a member of the panel, told me afterward.
- "What we're learning is that for hundreds of millions of people, they're voting with their feet, and they're really liking what they're finding in these large language models," he added.
By the numbers: One survey in the American Psychological Association's journal found that nearly half (49%) of Americans with ongoing mental health conditions have used large language models in the past year for psychological support.
- They were most commonly used to help with anxiety and depression and for personal advice.
- More than a third of respondents said they found LLMs more beneficial than traditional therapy, and only 9% reported encountering harmful responses.
Yes, but: That widespread use is juxtaposed against stories of users who have died by suicide or devolved into something known as "AI psychosis."
- A fundamental problem is that bots like ChatGPT and Claude are designed to keep people engaged, and tell them what they want to hear without judgment.
- "You can say something totally stupid and it tells you what a great idea it was," Insel said. "People who are good therapists are helping you to change what you think, how you feel, how you behave — and that's just not what chatbots do."
What they're saying: The APA published a health advisory late last year warning against overreliance on AI chatbots or "wellness apps" as a replacement for a qualified mental health provider.
- "At present, there is no consensus in the literature to support that GenAI chatbots and wellness apps possess essential qualifications and abilities required to provide mental health care, diagnostics, feedback, or even advice in most cases," the advisory states.
- Some of the people most at risk are the ones who most need high-quality mental health care, because the bots can "act as powerful amplifiers of preexisting vulnerabilities," per the APA.
The other side: That doesn't mean chatbots can't be useful, especially with the correct designs and guardrails.
- "It would be a tremendous mistake to say these things are only going to be dangerous, they'll never be safe, they'll never be effective, and we should make sure nobody has access to this," Insel said.
2. In their own words
Some of the most eye-opening information about the extent of the problem is coming from AI companies that have taken steps to make their products safer.
Between the lines: The tech industry is facing twin threats of government regulation — especially at the state level, for now — and litigation.
In October, OpenAI published an article titled "Strengthening ChatGPT's responses in sensitive conversations," laying out how the company improved the bot's default model to better respond to people in distress.
- It categorized the problem in three buckets: mental health concerns like psychosis or mania, self-harm and suicide, and emotional reliance on AI.
- It estimated that, in a given week, 0.07% of users indicate possible signs of mental health emergencies related to psychosis or mania; 0.15% "have conversations that include explicit indicators of potential suicidal planning or intent"; and the same ratio "indicate potentially heightened levels of emotional attachment to ChatGPT."
- Those are small percentages. But if applied to 800 million weekly users, they add up.
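The scale of those percentages is easy to check with a quick back-of-the-envelope calculation, using the weekly rates OpenAI published and its reported base of 800 million weekly users:

```python
# Rough weekly headcounts implied by OpenAI's published rates,
# applied to its reported 800 million weekly ChatGPT users.
weekly_users = 800_000_000

rates = {
    "possible psychosis/mania emergencies": 0.0007,  # 0.07%
    "explicit suicidal planning or intent": 0.0015,  # 0.15%
    "heightened emotional attachment": 0.0015,       # 0.15%
}

for label, rate in rates.items():
    print(f"{label}: ~{round(weekly_users * rate):,} users per week")
```

That works out to roughly 560,000 users a week in the first bucket and about 1.2 million in each of the other two.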
Anthropic, which developed the chatbot Claude, put out a paper last week analyzing patterns of what it's termed "disempowerment," or when AI interactions reduce "individuals' ability to form accurate beliefs, make authentic value judgments and act in line with their own values."
- It found that what it calls "reality distortion," which was the most common form of severe disempowerment, occurred in roughly 1 in 1,300 conversations. Milder cases were much more common.
- A blog post describing the paper laid out a pretty straightforward explanation of why this matters: "Concerns about AI undermining human agency are a common theme of theoretical discussions on AI risk."
- Notably, it also concludes that "the rate of potentially disempowering conversations is increasing over time."
Where it stands: A handful of states have already passed laws related to mental health and AI, though the Trump administration has said it plans to sue to overturn certain state-level AI laws.
- Seven lawsuits filed against OpenAI have alleged that plaintiffs' loved ones were harmed by their use of ChatGPT, including claims of wrongful death, assisted suicide and involuntary manslaughter, the WSJ reported.
My thought bubble: A lot of the conversation around AI in health care depends on adoption by a heavily regulated health system that is extremely resistant to change. That won't happen overnight, to say the least.
- This is different. This is adoption by consumers themselves, with the immediate prospect of very serious consequences.
- Of course, there are positives, too. People, for better or worse, generally like what they're getting. And use cases will only improve with better models.
- But for now, it seems to amount to playing with fire.
Thanks to Adriel Bettelheim and David Nather for editing and Matt Piper for copy editing.
Sign up for Axios Future of Health Care



