
Illustration: Shoshana Gordon/Axios
Artificial intelligence is getting caught up in the debate on the tech industry's treasured liability shield.
Why it matters: The government is wrapping its head around how laws on the books today apply to tech that's advancing at breakneck speed.
Driving the news: The House Energy and Commerce Committee will hold a hearing Wednesday on a draft bill from Chair Cathy McMorris Rodgers and Ranking Member Frank Pallone to sunset Section 230.
- The proposal aims to force substantive discussion of how to revamp Section 230 by setting a Dec. 31, 2025, deadline: rework tech's liability shield or see it eliminated.
Between the lines: An E&C Committee aide told Axios they're expecting AI to come up at the hearing and that there is broad, bipartisan agreement among lawmakers that generative AI should not be protected by Section 230.
- "The devil's in the details with how we define algorithms or AI, but that's something that we're definitely looking at," the aide said.
Some experts consider generative AI bots to be content creators themselves rather than hosts of others' content, which would strip them of Section 230's liability protections.
- Companies could get into trouble in cases in which chatbots "hallucinate," fully making up content on their own rather than pulling from an underlying data set.
- In response to user prompts, chatbots could also come up with responses that are defamatory and illegal.
In the Senate, a roadmap for AI regulation encourages committees to explore whether AI developers and deployers should be held accountable when their actions harm consumers, or whether end users should be accountable when they cause harm.
- The roadmap acknowledges various challenges to making AI companies liable, including how fast the technology is advancing and black-box algorithms that make it hard to know who is behind a given harm: developers or deployers.
- A national privacy standard could give legal certainty for AI developers and protections for consumers, the report states, drawing on feedback from an insight forum focused on privacy and liability.
What they're saying: "My reading of Section 230 in a world of generative AI is that tech companies now need new legislation for that technology to thrive," said UNC Tech Policy Center director Matt Perault.
- Perault suggested expanding Section 230 to protect generative AI platforms in most cases.
- That could help immunize AI companies except in cases of hallucinations.
The other side: McMorris Rodgers and Pallone are especially interested in holding tech companies accountable for their impact on kids.
- Kids' advocacy group Common Sense Media has reviewed all major AI products, giving OpenAI's ChatGPT a 2 out of 5 for kids' safety and Google's Bard a 3 out of 5.
- "When you add into the conversation the impact of AI on policy debates and on users, it is clearly time for companies to be held accountable for the impact of their products and platforms," the group's tech policy head, Amina Fazlullah, said.
- Victims' rights attorney Carrie Goldberg, who will testify at the hearing Wednesday, said she is "totally seeing the potential for new harms," pointing to Snap's AI chatbot.
What's next: Section 230 action is likely to happen in the courts while lawmakers try to work out deep divides in an election year, when legislative efforts will soon peter out.
- Various circuit courts have ruled against total Section 230 immunity in recent years, creating confusion, and the Supreme Court has left Section 230 untouched after hearing a few cases on it.
- The E&C Committee aide said they're bringing up the court cases to motivate House members and Big Tech to take action: "Is it time for Congress to actually act and make reforms and have Big Tech be part of those conversations rather than just have the courts tell us one way or the other?"
Ashley Gold contributed to this report.
