Grok's explicit images reveal AI's legal ambiguities

Illustration: Sarah Grillo/Axios
Grok's continued posting of nonconsensual images on X highlights a key unsettled legal issue around artificial intelligence: just who — if anyone — is liable for harm caused by a chatbot's outputs.
Why it matters: Businesses, individuals and society are increasingly reliant on AI, but there's little clarity over who bears responsibility when things go wrong.
The big picture: AI chatbots have gained massive usage around the world despite a number of legal uncertainties.
- The initial debate focused largely on the legality of how the systems were trained on copyrighted data. Early battles have largely gone the tech companies' way, with several courts ruling that training qualifies as "fair use."
- More recently, a number of lawsuits have centered on whether companies are liable when their chatbots give dangerous advice. ChatGPT and CharacterAI, for example, are both facing lawsuits for allegedly pushing people toward suicide and, in at least one case, murder.
- Bubbling under the surface has been the issue of whether the tech companies are liable when a chatbot harms their reputation — an issue that Grok's depiction of people in bikinis and sexual positions has brought to the forefront.
Between the lines: Many chatbots have the potential to create deepfakes, but Grok stands out from its peers in two important ways.
- First, it openly touts its willingness to undertake conversations and tasks that other chatbots would decline, such as creating sexualized images.
- Second, conversations with Grok on X are public, including both the user's request and the chatbot's response. Grok's replies feed is filled with examples of users asking it to replace a subject's clothing with skimpy attire and the chatbot complying.
- Grok is not only putting people in bikinis, but also sharing those images with the world.
- The Grok example "is really horrific because it kind of puts a black eye on the entire AI landscape," New York-based attorney James Rubinowitz tells Axios.
Zoom in: As for the company's legal protection, much of the discussion has focused on the degree to which chatbot makers are protected by Section 230 of the Communications Decency Act. The oft-cited text gives tech companies broad (but not unlimited) protection from liability for content produced by others.
- Many legal scholars argue Section 230 shouldn't protect what a chatbot spits out since it is the tech companies producing the speech.
- "Section 230 will not protect these LLMs," Rubinowitz, who teaches a law school class on AI in litigation, tells Axios. "When we look at what's going on now, it's very clear that the AI companies are not just a library or repository of information."
- In the Grok case, Rubinowitz says that AI engines are the creators of content: "What's really going on here is the AI is the author and creator of this content, of language that can become defamatory or libelous." One debate that may crop up is whether an AI-generated image counts as speech, Rubinowitz says.
- "It would be immune under Section 230 if the image was created by a third party, but the fact that people are using their AI tools to create these images ... Section 230 immunity does not automatically apply to X," Ari Waldman, a law professor at the University of California at Irvine, tells Axios.
Yes, but: Grok is showing no signs of slowing down. Executives have been touting the traffic that has accompanied Grok's permissiveness, with X product chief Nikita Bier noting on Monday that X has seen record levels of engagement over the past week.
- On Tuesday, Grok creator xAI announced it has raised a higher-than-expected $20 billion in new funding. Blue-chip investors including Fidelity, Cisco and Nvidia were apparently willing to have their names attached to Musk's AI company despite the controversy and potential legal liabilities.
What to watch: Keep an eye on the courts and how companies argue that current law protects their products, as well as how a key U.S. law aimed at curbing the spread of such content on X — the TAKE IT DOWN Act — is eventually enforced. Various state laws also target nonconsensual deepfakes, and companies are coming into compliance with the EU AI Act.
- "Eventually a higher court is going to say, 'No 230 for you. The LLM is the one creating this speech,'" Rubinowitz says.
- Waldman pointed to a Ninth Circuit opinion from last August, which held that Section 230 protections for X did not go as far as the company argued, as a harbinger of cases to come.
- The White House and the Federal Trade Commission did not respond to repeated requests for comment.