Illustration: Aïda Amer/Axios
The Trump administration is turning up the heat on one of Big Tech's most important legal protections, as the Justice Department convenes a debate over changing a law that protects platforms from suits over content their users post.
Why it matters: The threat to remove immunity granted by Section 230 of the Communications Decency Act is one of a handful of weapons that Washington is mulling using against Facebook, Google, and other tech giants. Trump administration enthusiasm for revoking or revising the protection could give such proposals a boost in Congress.
Driving the news: The DOJ is convening policy experts for a workshop Wednesday titled "Section 230 — Nurturing Innovation or Fostering Unaccountability?"
- The event features a public morning session focused on whether the law "encourages or discourages" platforms in addressing online harms like child exploitation.
- A private session in the afternoon will examine content moderation, free speech, online conduct and illicit activity, according to a person familiar with the event.
- Speakers invited to the closed-door portion include Internet Association deputy counsel Elizabeth Banker, former 21st Century Fox lobbyist Rick Lane and former Facebook chief security officer Alex Stamos, according to a list obtained by Axios.
The big picture: Republicans and Democrats in Congress frustrated with how tech companies handle content moderation have been debating changes to 230 to address a range of issues, including child exploitation.
- Senate Judiciary Chairman Lindsey Graham is working on draft legislation that would require tech companies to "earn" 230 protections by following best practices for fighting child exploitation, with the Justice Department playing a key role in determining those practices.
- "Many companies are not adequately incentivized to be proactive about moderating, do not use best practices, and sometimes act recklessly to the harm suffered by children on their sites," said Yiota Souras, general counsel for the National Center for Missing & Exploited Children, who will speak at the workshop.
Yes, but: Supporters of Section 230 have questioned the motives of the DOJ and proponents of changing the law, including critics from some industries that have clashed with tech — including many in the news media.
- The threat to remove Section 230 protection is being used as a lever by competitors like media companies that want tech platforms to share ad revenue — and also by law enforcement agencies, which want the platforms to build encryption backdoors, said Neil Chilson, a senior research fellow at the Charles Koch Institute, another workshop speaker.
Of note: Attorney General William Barr has been campaigning for tech platforms to build back doors into their encryption systems so that law enforcement can better monitor communications while investigating child exploitation and terrorism, and some critics say he is using the Section 230 debate as a pressure tactic.
- "Granting broad immunity to platforms that take no efforts to mitigate unlawful behavior or, worse, that purposefully blind themselves — and law enforcers — to illegal conduct occurring on, or facilitated by, the online spaces they create, is not consistent with" the law's intent," Barr said in a December speech.
The other side: Sen. Ron Wyden (D-Ore.), a co-author of Section 230, sees the move to change the law as a blow against free speech being promoted by "big legacy companies," as he wrote in the Washington Post.
"Under the guise of getting rid of lies and protecting children, they’re working with the Trump administration and top Republicans to undermine Americans' rights and give the government unprecedented control over online speech."— Sen. Ron Wyden
Flashback: Section 230 was enacted in 1996 as part of the broader Communications Decency Act, whose anti-indecency provisions the Supreme Court struck down the following year — leaving the liability shield as the law's main surviving provision.
- An earlier court decision had left a muddy precedent suggesting that the only way online service providers could protect themselves legally was to not moderate their content at all.
- 230's authors wanted to make it clear that online services could actively police their users' content without assuming liability for it.
For the record: Then and now, Section 230 protects platforms from civil suits over user content, but not from criminal prosecution for content that breaks federal law.