OpenAI, Meta, Google, xAI face chatbot scrutiny from FTC

Illustration: Natalie Peeples/Axios
The Federal Trade Commission opened an inquiry into AI chatbot safety on Thursday, demanding information from seven companies about negative effects of chatbots used by teens and children.
Why it matters: The probe highlights the growing tension between the U.S. push for AI leadership and the risks of exposing kids to untested technologies.
Driving the news: The seven companies are OpenAI, Meta and its Instagram unit, Alphabet (Google), xAI, Snap and Character.AI.
Between the lines: The FTC says it wants to understand what safety efforts these companies have taken, to evaluate how children and teens are able to interact with these tools.
- The FTC said it aims to limit the potential negative effects and to apprise users and parents of the risks.
- These chatbots often mimic human-like behavior, which could lead younger users to form emotional bonds, increasing those risks, per the FTC.
Catch up quick: AI chatbot companions are at the center of a handful of lawsuits against OpenAI, Google and Character.AI.
- Parents of teenagers are suing the companies, aiming to hold the AI makers responsible for their children's suicides.
Zoom in: Companion apps are a lucrative use of generative AI because of their ability to grab and hold users' attention.
- The FTC seeks to understand how these companies monetize user engagement and disclose their data collection practices.
Between the lines: The probe lands as AI tools spread rapidly in schools and get a boost from federal initiatives.
- "As AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry," FTC Chairman Andrew Ferguson said in a statement.
The other side: OpenAI, Meta and Character.AI have announced initiatives to add parental controls and other teen safety features to their tools.
- "We recognize the FTC has open questions and concerns, and we're committed to engaging constructively and responding to them directly," an OpenAI spokesperson told Axios.
- "We look forward to collaborating with the FTC on this inquiry and providing insight on the consumer AI industry and the space's rapidly evolving technology," a spokesperson from Character.AI said.
💭 Thought bubble, from Axios tech policy reporter Ashley Gold: The inquiry, which allows the FTC to obtain non-public information from major tech companies, is rare scrutiny from the Trump administration of the safety implications of AI.
- It will force those companies to divulge how they approach the safety of children who use their chatbots.
- The investigation is a sign that the administration is taking seriously recent stories of teens dying by suicide after talking to AI chatbots, but unless the FTC decides to pursue specific company behavior beyond the inquiry, not much may change.
Go deeper: Tech firms, states look to rein in AI chatbots' mental health advice
