Washington to require AI labels and set limits on chatbots

Illustration: Allie Carl/Axios
Washington is policing artificial intelligence, with new laws signed by Gov. Bob Ferguson setting guardrails for companion chatbots and requiring disclosure when images are made or edited by AI.
Why it matters: The new Washington laws highlight how policymakers are responding as AI use rapidly expands in business and everyday life.
What's inside: House Bill 1170 will require large AI companies to identify when images, video, or audio are altered or created using their systems "to the extent commercially and technically reasonable."
- A watermark placed on an image, or data embedded in the digital file, could satisfy the disclosure requirement.
- Companies must also offer tools to help detect AI-modified content.
House Bill 2225 takes aim at companion chatbots — AI systems designed to mimic human conversation and build ongoing relationships with users.
- Companies that run them must tell users upfront that they're talking to a machine and remind them every three hours.
- If the user is a minor, those reminders must come every hour, and the company must prevent the chatbot from using manipulative tactics to deepen emotional attachment.
- Companies must have a plan for flagging users who express suicidal thoughts or self-harm and directing them to a crisis line or other resources.
A separate law allows people to sue over unauthorized AI-generated uses of their voice or image, including so-called "deepfakes."
What they're saying: The chatbot regulations are partly in response to stories of teens "turning to these chatbots in times of distress before quite tragically ultimately ending their lives," Ferguson said at last week's bill signing ceremony.
- When it comes to digital media, the governor added, it is important to "know what is human-made and what is machine-generated."
- "By making it clear when AI generates media, Washingtonians are better protected against confusion, deception and misinformation," he said.
The other side: Some critics object to the chatbot law allowing private individuals to sue over violations, arguing enforcement should be left to state regulators.
- "Allowing these standards to be defined through private lawsuits, rather than through agency rulemaking or coordinated enforcement, may create uncertainty for responsible actors seeking to comply in good faith," the Washington Liability Reform Coalition wrote in a letter asking Ferguson to veto part of the legislation.
- Ferguson ultimately signed the chatbot bill without changes.
What's next: The AI disclosure law will take effect in February 2027, while the chatbot regulation law will take effect in January.
- The anti-deepfake law targeting AI-generated impersonations will take effect in June.
