
Illustration: Aïda Amer/Axios
The leader of a powerful House committee is taking aim at websites' liability shield in a new bill that would remove protections if recommended content leads to real-world harm.
The big picture: The bill is the latest attempt to rework tech's liability shield amid mounting frustration from both Democrats and Republicans over Facebook, YouTube and Twitter's content moderation practices.
Catch up quick: Section 230 of the Communications Decency Act protects websites from liability over content posted by their users, such as comments on Facebook.
Driving the news: House Energy & Commerce Committee Chairman Frank Pallone's (D-N.J.) Justice Against Malicious Algorithms Act would remove Section 230 protections for online platforms if:
- They knowingly or recklessly use a personalized recommendation algorithm, and
- The algorithm recommends content that materially contributes to physical or severe emotional injury.
What they're saying: "The time for self-regulation is over, and this bill holds them accountable," Pallone said in a statement. "Designing personalized algorithms that promote extremism, disinformation, and harmful content is a conscious choice, and platforms should have to answer for it."
Details: The bill, set to be introduced Friday, is also sponsored by communications subcommittee Chairman Mike Doyle (D-Pa.), consumer protection subcommittee Chairwoman Jan Schakowsky (D-Ill.) and health subcommittee Chairwoman Anna Eshoo (D-Calif.).
Flashback: The lawmakers pledged to take action against tech companies during a contentious hearing in March with the CEOs of Facebook, Twitter and Google.