AI chatbots loom over tech and social media lawsuits

Illustration: Maura Losch/Axios
Social media companies are heading to court over kids' mental health — and a New Mexico judge has just demanded Meta produce chatbot records in one key lawsuit, Axios has learned.
Why it matters: Lawsuits targeting addictive algorithms and the mental health impacts on children are just the start of a new legal era where AI could be the wild card.
State of play: At a Nov. 7 hearing, New Mexico Attorney General Raúl Torrez won a fight against Meta to include AI chatbot records in a lawsuit that's headed to trial in February, according to court documents. It's not yet clear what records Meta will need to produce.
- The case began in 2023 over Meta's allegedly addictive algorithms on social media feeds and design choices that fail to protect children from predators online.
- "This is entirely infeasible with just two months until the close of fact discovery and four months until trial," Meta wrote in an Oct. 10 filing against providing the chatbot records.
- Beyond the timing, Meta argues in its filing that chatbots are irrelevant and beyond the scope of the case.
In California, juries will soon hear cases targeting social media's addictive algorithms and design features, such as endless scroll.
- Chatbots are not an official part of these addiction cases.
- But Laura Marquez-Garrett of the Social Media Victims Law Center said that public consciousness around generative AI is shifting to better understand how companies can cause harm, and that will inevitably have an impact on the addiction litigation.
- "The jury is going to inherently understand these companies are hurting kids through artificial intelligence driven tools," she said.
Yes, but: Don't expect chatbot-related evidence to be submitted in the California lawsuits, or chatbot questions for Meta CEO Mark Zuckerberg and the other Big Tech CEOs set to testify. Those cases center on specific design features, like autoplaying videos and infinite scroll.
- While any shift in broader public consciousness has its limits in an ongoing court proceeding, lawyers, lawmakers and regulators are gearing up for the new legal landscape that AI has created.
The big picture: Lawsuits against chatbot makers are now cropping up, focused on mental health impacts on minors and consumers of all ages.
- The Federal Trade Commission has also opened an inquiry into AI chatbot safety, demanding information from seven companies about negative effects of chatbots used by teens and children.
Lawmakers in states across the country are trying to figure out how to deal with chatbots, too.
- California Gov. Gavin Newsom (D) recently signed legislation requiring chatbot operators to have protocols in place to address content or interactions involving suicide or self-harm, such as referring a user to a crisis hotline.
- The new law requires chatbots to notify minors every three hours that they should "take a break" and that the chatbot is not human.
Congress doesn't have a good track record of regulating tech.
- Some senators recently introduced a bipartisan bill that would ban AI companions for minors, and Congress did pass the TAKE IT DOWN Act this year to require platforms to remove nonconsensual intimate images and criminalize posting such content.
- But lawmakers on the Hill didn't regulate social media or pass meaningful data privacy legislation, and that failure has set the stage for the AI era.
What they're saying: "It's unlikely that Congress would do some really strong, dramatic social media and AI safety legislation," said Danny Weiss, chief advocacy officer of Common Sense Media.
- "So the idea that you do something sort of mediocre or weak, and then preempt the states is a nonstarter for those of us who work in the space."
The other side: Companies are adjusting their policies on kids' safety — adding parental controls and restrictions for minors, and pushing for age-verification legislation.
What we're watching: While generative AI may be the elephant in the room for lawsuits against addictive algorithms, humanoid robotics may very well be the next area of AI litigation, some observers say.
- Experts are calling for emotional safety assessments and policies that mitigate psychological distress from humanoid robotics, which are being built as companions.
- "We are struggling just to tackle social media, so here's what I would say about robotics — I hope to God I never have to think about it," Marquez-Garrett said.
- "It's still the same technology: AI. Until we know this is safe, why the hell are we going to let some company create a motor vehicle without testing?"
