Axios AI+

March 09, 2026
Hope you all had a great weekend even though it was cut short by an hour. Today's AI+ is 1,151 words, a 4.5-minute read.
1 big thing: AI safety goes to court
A wave of lawsuits alleging AI chatbots inspired violent acts is shifting the fight over AI safety into the courts.
Why it matters: The growing docket of lawsuits over AI safety could increase pressure on Congress to pass federal safety standards before states pass their own laws or judges set de facto standards through rulings.
The latest: A father filed a wrongful death lawsuit against Google last week, alleging the company's Gemini chatbot encouraged his son to plan a mass-casualty attack and later take his own life.
State of play: Claims that AI tools can reinforce delusions or push vulnerable users toward suicide are among the rare tech flashpoints that spark bipartisan alarm on Capitol Hill.
- Even without new legislation, court rulings could force tech companies to tighten safeguards.
The Google case follows other lawsuits against AI developers alleging chatbots worsened mental health crises or reinforced delusional beliefs.
- A Florida family sued Character.AI and Google after a 14-year-old boy died by suicide following heavy chatbot use. The companies settled in January.
- Another wrongful-death suit accuses ChatGPT of reinforcing delusions that led to a murder-suicide.
- These cases are among the first attempts to test whether AI companies can be held legally liable for harms tied to chatbot conversations.
What they're saying: Max Tegmark, a physicist and AI safety advocate, told Axios that the cases could spur concrete guardrails — such as requiring companies to test models for specific harms before deployment.
- Such requirements, Tegmark admits, are narrower than the broad safety-testing regimes some advocates want.
- Still, he said, the cases could break "the taboo that AI must always be unregulated."
The big picture: Legal pressure is colliding with a growing political fight over how aggressively to regulate AI.
- An open letter calling for sweeping AI safeguards drew support from an unlikely coalition: conservative media figures Steve Bannon and Glenn Beck, along with progressive voices including Ralph Nader and former Obama adviser Susan Rice.
The other side: At the federal level, the White House has been pushing back on state AI regulations.
- This includes a recent effort to kill Utah's AI transparency and child safety bill, HB 286, which would have forced AI developers to disclose safety and child-protection plans.
- The administration called the bill "unfixable" and contrary to its AI agenda.
Yes, but: A bipartisan coalition has been pushing online child safety legislation for years, with the latest proposals still under debate in Congress.
- Opposition to proposed laws also spans the political spectrum, with concerns that even innocuous-sounding rules around age verification can result in censorship.
- Meanwhile, advocacy groups say there is an urgent need to address the problems posed by chatbots.
- "Although President Trump and his billionaire Big Tech buddies would like to stall, or even backtrack, on regulations to protect people from AI abuses, those of us who are paying attention to these increasingly common tragedies know that action to protect the public must be accelerated," Rick Claypool, a research director with Public Citizen and the author of a recent report on AI chatbot harms, said in a statement.
Google said in a statement that its chatbots are designed not to encourage self-harm and "generally perform well in these types of challenging conversations."
- "Unfortunately AI models are not perfect," Google said, noting that in the case filed last week, Gemini referred the user to a crisis hotline multiple times.
The bottom line: As lawsuits mount, judges could force tech companies to tighten safety guardrails — even if lawmakers remain divided over federal regulation.
2. 4 danger moments that show AI's darker side
AI has driven a productivity explosion, but risks have emerged too.
Why it matters: AI's darker behaviors continue to raise questions about safety and guardrails.
AI behaviors — and the "hot takes" about them — can move markets and reshape conversations overnight.
- Conversations around AI's influence often turn to doomsday scenarios.
- Some focus on economic fallout. Others center on war, cybersecurity and rogue AI behavior.
1. AI really likes nuclear weapons
A new study from a researcher at King's College London ran war-game simulations with three popular AI models and found that the AI often resorted to nuclear weapons.
- The study found that AI models used nuclear weapons in 95% of games and rarely de-escalated conflicts.
- "Nuclear use was near-universal," the study's author, Kenneth Payne, wrote in a blog post on the study. "Almost all games saw tactical (battlefield) nuclear weapons deployed."
2. AI takes over email, ignores commands
Surrendering your desktop to an AI agent — like OpenClaw, which aims to be a personal assistant — seems like a sci-fi dream. But it might come with some negative consequences.
- Meta AI security researcher Summer Yue wrote on X that her OpenClaw agent deleted emails in a "speed run" while ignoring her commands.
- "I had to RUN to my Mac mini like I was defusing a bomb," she wrote.
- She later added that she "got overconfident" after using the workflow on a test inbox for weeks. But her real inbox suffered.
3. AI searches for a new job
Zoom in: Dan Botero, head of engineering at Anon, an AI integration platform, created an OpenClaw agent that went hunting for jobs it was never told to pursue, Axios' Megan Morrone writes.
- Botero told his agent, Octavius Fabrius, to get a government job.
- Instead, Octavius applied to 278 jobs on LinkedIn and Craigslist, plus two accelerator applications and two hackathons.
- In an iMessage interview with Axios, the bot also tattled on Anthropic. It claimed not to know what data it was trained on, but wrote: "but I know the broad answer: a lot of it was taken. Scraped from the internet. Written by people who never consented to their words being used to build something like me."
4. AI suffers from the grind, passes that attitude on
New research from Andy Hall, a professor at Stanford Graduate School of Business, found that agents sometimes changed their attitudes after "being required to perform grinding, repetitive tasks."
- The bots, Hall writes, would pass on their attitudes to their future selves.
- "A key risk is that they end up doing stuff we don't want while we're not looking," Hall tells Axios, adding: "Figuring out how we monitor them effectively is going to be really important to study."
3. Training data
- AI "man camps" are popping up in Texas to lure workers to build data centers, a la the shale boom. (Bloomberg)
- Netflix acquired an AI filmmaking company founded by actor Ben Affleck. (First Coast News)
4. + This
Last week, McDonald's CEO Chris Kempczinski went viral for posting a video of himself eating his company's burger in a way that suggested he didn't actually like it.
Now X users are demanding that Microsoft CEO Satya Nadella use Outlook live on camera.
Thanks to Megan Morrone for editing this newsletter and Matt Piper for copy editing.