
Illustration: Aïda Amer/Axios
The outcry over Congress' latest proposal to regulate tech companies' algorithms shows how difficult it is for lawmakers and platforms alike to deal with online content moderation.
Why it matters: The new bill is backed by the leadership of a powerful committee with jurisdiction over the issue, giving it more momentum than some previous legislative attempts to revamp online platforms' legal protections.
What's happening: Scholars, organizations backed by tech companies, and a digital rights group weighed in Thursday to oppose the Justice Against Malicious Algorithms Act, sponsored by House Energy & Commerce Committee Chairman Frank Pallone (D-N.J.).
- "This bill is well-intentioned, but it's a total mess," Fight for the Future director Evan Greer said.
How it works: The bill would modify Section 230 of the Communications Decency Act, which protects websites from liability over content posted by their users.
- Under the bill, an online platform would lose protection under Section 230 if it knowingly or recklessly uses a personalized algorithm to recommend content that materially contributes to physical or severe emotional injury.
The fine print: The bill does not apply to search features or algorithms that don't rely on personal information.
- It also doesn't apply to internet infrastructure services, such as web hosting, or to online platforms with fewer than 5 million unique monthly visitors or users.
What they're saying: The bill, backed by Democratic leaders on the committee, is meant to hold platforms accountable for recommending content that leads to real-world harms.
- "When you've got platforms that have recommended accounts to teenagers that actually encourage eating disorders, that harm their mental health, I think ordinary Americans say, 'Time is up,'" Rep. Jan Schakowsky (D-Ill.), chairwoman of the House Energy & Commerce consumer protection subcommittee, told Axios.
- "They are not going to be able to go to the courts, as they have in the past and say, 'No, we're immune. No, can't touch us.'"
Yes, but: Critics say the bill would have unintended consequences as well — and likely wouldn't even achieve its stated goal.
- "Exempting personalized and algorithmically amplified content from Section 230 protections wouldn’t prevent platforms from using algorithms to pick and choose what users see," Greer said. "It would just incentivize those platforms to show users more 'sanitized,' corporate content that has been vetted by lawyers as non-controversial."
- K. Dane Snowden, president of the Internet Association, a tech industry trade group, said the bill would undermine the work social media companies are doing to address content moderation problems.
- "So we're back in the game of hiring fewer engineers and hiring more lawyers," Snowden told Axios. "That's the wrong direction for how we want to take this."
- Mary Anne Franks, a professor at the University of Miami School of Law, said that while the bill might lead large tech companies to reduce their reliance on personalized algorithms, it's not clear that would reduce harmful or dangerous content.
- "Companies could and probably would simply rely more on group-targeted algorithms to recommend content," Franks said. "Algorithms that are tailored for vulnerable or volatile groups (for example, teenage girls or white supremacists) seem equally or more concerning than personalized algorithms."
The other side: The bill could incentivize larger platforms to try to avoid amplification that causes harm, Karen Kornbluh, director of the Digital Innovation and Democracy Initiative at the German Marshall Fund, told Axios.
- "Inaction is not tenable and this is a very important step forward," Kornbluh said.
- Roddy Lindsay, a former Facebook data scientist who has called for changes to Section 230, supports the legislation because he thinks it will force companies to give users more control over their online experience.
- "Companies would have to balance the tradeoffs between continuing to use these engagement-optimized algorithms where there might be some additional legal risk, or go to a system that is more deterministic and gives users control of the content they are looking at," Lindsay told Axios. "It changes the incentive profile in a way that pushes the incentives more toward user control and away from AI control."