Axios AI+ Government

November 14, 2025
Happy Friday! That means it's time for your weekly deep dive into how governments encourage, regulate and use AI.
- Let us know your top AI policy questions: just click reply to drop us a note.
Situational awareness: Anthropic is releasing an open-source method to evaluate the political "evenhandedness" of AI chatbots, the company said yesterday, per our colleague Ina Fried.
Today's newsletter is 1,490 words, a 5.5-minute read.
1 big thing: Chatbots loom over social media lawsuits
Social media companies are heading to court over kids' mental health, and a New Mexico judge has just demanded that Meta produce chatbot records in one key lawsuit, Maria has learned.
Why it matters: Lawsuits targeting addictive algorithms and the mental health impacts on children are just the start of a new legal era where AI could be the wild card.
State of play: At a Nov. 7 hearing, New Mexico Attorney General RaΓΊl Torrez won a fight against Meta to include AI chatbot records in a lawsuit that's headed to trial in February, according to court documents. It's not yet clear what records Meta will need to produce.
- The case began in 2023 over Meta's allegedly addictive algorithms on social media feeds and design choices that fail to protect children from predators online.
- "This is entirely infeasible with just two months until the close of fact discovery and four months until trial," Meta wrote in an Oct. 10 filing against providing the chatbot records.
- Beyond the timing, Meta argues in its filing that chatbots are not relevant to the case and fall outside its scope.
In California, juries will soon hear cases targeting social media's addictive algorithms and design features, such as endless scroll.
- Chatbots are not an official part of these addiction cases.
- But Laura Marquez-Garrett of the Social Media Victims Law Center said public consciousness around generative AI is shifting toward a better understanding of how companies can cause harm, and that shift will inevitably affect the addiction litigation.
- "The jury is going to inherently understand these companies are hurting kids through artificial intelligence driven tools," she said.
Yes, but: Don't expect chatbot-related evidence to be submitted in the California lawsuits, or chatbot questions for Meta CEO Mark Zuckerberg and the other Big Tech CEOs set to testify. Those cases turn on specific design features, like autoplaying videos and infinite scroll.
- While any shift in broader public consciousness has its limits in an ongoing court proceeding, lawyers, lawmakers and regulators are gearing up for the new legal landscape that AI has created.
The big picture: Lawsuits against chatbot makers are now cropping up, focused on mental health impacts on minors and consumers of all ages.
- The Federal Trade Commission has also opened an inquiry into AI chatbot safety, demanding information from seven companies about negative effects of chatbots used by teens and children.
2. Part 2: Chatbot legislation
Lawmakers in states across the country are trying to figure out how to deal with chatbots, too.
- California Gov. Gavin Newsom (D) recently signed legislation requiring chatbot operators to have protocols in place to address content or interactions involving suicide or self-harm, such as referring a user to a crisis hotline.
- The new law requires chatbots to notify minors every three hours that they should "take a break" and that the chatbot is not human.
Congress doesn't have a good track record in regulating tech.
- Some senators recently introduced a bipartisan bill that would ban AI companions for minors, and Congress did pass the TAKE IT DOWN Act this year to require platforms to remove nonconsensual intimate images and criminalize posting such content.
- But lawmakers on the Hill didn't regulate social media or pass meaningful data privacy legislation, and that failure has set the stage for the AI era.
What they're saying: "It's unlikely that Congress would do some really strong, dramatic social media and AI safety legislation," said Danny Weiss, chief advocacy officer of Common Sense Media.
- "So the idea that you do something sort of mediocre or weak, and then preempt the states is a nonstarter for those of us who work in the space."
The other side: Companies are adjusting their policies on kids' safety, adding parental controls and restrictions for minors, and pushing for age verification legislation.
What we're watching: While generative AI may be the elephant in the room for lawsuits against addictive algorithms, humanoid robotics may very well be the next area of AI litigation, some observers say.
- Experts are calling for emotional safety assessments and policies that mitigate psychological distress from humanoid robotics, which are being built as companions.
- "We are struggling just to tackle social media, so here's what I would say about robotics β I hope to God I never have to think about it," Marquez-Garrett said.
- "It's still the same technology: AI. Until we know this is safe, why the hell are we going to let some company create a motor vehicle without testing?"
3. Exclusive: Maryland taps AI to improve access
Maryland Gov. Wes Moore (D) is partnering with AI companies Anthropic and Percepta to try to improve access to government benefits and housing, according to an announcement shared first with Maria.
Why it matters: Applying for government assistance or housing permits, and processing those applications, can be difficult and time-consuming.
- Maryland officials are hoping AI will help.
- "Leveraging AI will accelerate our push to fight poverty, turn renters into homeowners, and ensure every Marylander can access essential services like nutrition and financial support, quickly and effectively," Moore said in a statement.
How it works: Maryland will deploy Anthropic's Claude model to power chatbots across agencies to help residents apply for food aid, Medicaid and temporary cash assistance, as well as to identify additional programs that residents might qualify for.
- Caseworkers will use Claude to verify people's eligibility and as a resource for policy guidance in more complicated cases, per the announcement.
- Percepta's technical team will set up its software platform to help Maryland streamline permitting and licensing for housing development.
- The Rockefeller Foundation provided funding support for the partnership.
Context: Maryland has what it calls a "responsible AI policy" for state agencies to provide oversight for how AI systems are used.
Between the lines: Moore, who is running for reelection next year amid presidential speculation, is tapping AI to advance his party's focus on affordability, a message that powered Democrats' big wins this year.
Catch up quick: Maryland was already using AI to deploy food assistance programs for kids, and these new tools build on that approach.
What's next: Anthropic and Moore's "innovation team" are working on another data analysis tool to identify where people in local communities are in need of food, child care and other services.
- Maryland is also exploring an AI upskilling pilot with Anthropic for early career professionals.
4. Execs: AI won't replace human creativity
AI amplifies human creativity but will never replicate what makes brands and people successful, Hootsuite CEO Irina Novoselsky and Bluesky COO Rose Wang told Ashley in an interview this week at Web Summit in Lisbon, Portugal.
Why it matters: Leaders of social media companies are embracing AI, but aiming to preserve what made their platforms successful: people's authentic voices and creativity.
What they're saying: "AI amplifies what we can already do, but it doesn't replace creativity. For creators, the big concern is consent: how their data and voice are used," Novoselsky said.
- Responsible AI means knowing when to hold off, she said. "The number-one answer is restraint: stop at the right level of capability before unleashing every model into the wild."
On federal AI regulation, Wang said finding the right balance is key.
- "Regulation and innovation are two partners that have to work together. Regulation helps people feel protected, but sometimes it comes at the cost of small players," Wang said.
Both executives said they support initiatives in which companies band together to set rules for how creators' work can be used, and push AI companies to follow those rules.
5. How Mozilla is adapting to the AI age
Mozilla wants to do for AI what it did for the web: cautiously promote decentralized, open-source systems, president Mark Surman told Ashley during an interview at Web Summit in Lisbon.
Why it matters: Mozilla's effort underscores the broader global debate over who controls the data and sets the rules in the AI era.
This conversation has been condensed and edited for clarity.
How are you thinking about AI regulation, both in Europe with the AI Act and in the U.S.?
What you're seeing in the U.S. is what always happens with new technology. We're figuring out how to regulate it, just like we did with automobiles over decades.
- What's really interesting right now is industrial policy and how it intersects with AI.
- Because of global trade tensions, governments are thinking hard about tech sovereignty: how to build up their own AI and digital industries. If you want to know where real policy impact on AI will come from, watch that space.
Thanks to Mackenzie Weinger and David Nather for editing and Matt Piper for copy editing.