Judges may shape AI safety before Congress does

A wave of lawsuits alleging AI chatbots inspired violent acts is shifting the fight over AI safety into the courts.
Why it matters: The growing docket of AI safety lawsuits could increase pressure on Congress to set federal standards before states write their own laws or judges establish de facto rules through their decisions.
The latest: A father filed a wrongful death lawsuit against Google last week, alleging the company's Gemini chatbot encouraged his son to plan a mass-casualty attack and later take his own life.
State of play: Claims that AI tools can reinforce delusions or push vulnerable users toward suicide are among the rare tech flashpoints that spark bipartisan alarm on Capitol Hill.
- Even without new legislation, court rulings could force tech companies to tighten safeguards.
The Google case follows other lawsuits against AI developers alleging chatbots worsened mental health crises or reinforced delusional beliefs.
- A Florida family sued Character.AI and Google after a 14-year-old boy died by suicide following heavy chatbot use. The companies settled in January.
- Another wrongful-death suit alleges OpenAI's ChatGPT reinforced delusions that led to a murder-suicide.
- These cases are among the first to test whether AI companies can be held legally liable for harms tied to chatbot conversations.
What they're saying: Max Tegmark, a physicist and AI safety advocate, told Axios that the cases could spur concrete guardrails — such as requiring companies to test models for specific harms before deployment.
- Such requirements, Tegmark acknowledged, are narrower than the broad safety testing some advocates want.
- Still, he said, the cases could break "the taboo that AI must always be unregulated."
The big picture: Legal pressure is colliding with a growing political fight over how aggressively to regulate AI.
- An open letter calling for sweeping AI safeguards drew support from an unlikely coalition of conservative media figures Steve Bannon and Glenn Beck and progressive voices including Ralph Nader and former Obama adviser Susan Rice.
The other side: At the federal level, the White House has been pushing back on state AI regulations.
- This includes a recent effort to kill Utah's AI transparency and child safety bill, HB 286, which would have forced AI developers to disclose safety and child-protection plans.
- The administration called the bill "unfixable" and contrary to its AI agenda.
Yes, but: A bipartisan coalition has been pushing online child safety legislation for years, with the latest proposals still under debate in Congress.
- Opposition to the proposed laws also spans the political spectrum, driven by concerns that even innocuous-sounding rules, such as age verification requirements, could result in censorship.
- Meanwhile, advocacy groups say there is an urgent need to address the problems posed by chatbots.
- "Although President Trump and his billionaire Big Tech buddies would like to stall, or even backtrack, on regulations to protect people from AI abuses, those of us who are paying attention to these increasingly common tragedies know that action to protect the public must be accelerated," Rick Claypool, a research director with Public Citizen and the author of a recent report on AI chatbot harms, said in a statement.
Google said in a statement that its chatbots are designed not to encourage self-harm and "generally perform well in these types of challenging conversations."
- "Unfortunately AI models are not perfect," Google said, noting that in the case filed last week, Gemini referred the user to a crisis hotline multiple times.
The bottom line: As lawsuits mount, judges could force tech companies to tighten safety guardrails — even if lawmakers remain divided over federal regulation.
