Illustration: Lazaro Gamio/Axios
Last Monday, the U.K. announced a sweeping plan to curb the spread of harmful online content, part of a global wave of new content regulations targeting material designed to polarize and mislead.
The big picture: The British proposal, which comes on the heels of new measures in Australia and Singapore, would create a regulator empowered to punish social media platforms that fail to quickly remove harmful material, including disinformation. But these approaches — which focus on content rather than problematic behavior — have concerning implications for free expression.
Where it stands:
- Singapore introduced draft legislation two weeks ago that would allow the government to compel corrections to online content it deems false. Given the government's history of silencing criticism, rights groups are justifiably concerned.
- Also two weeks ago, Australia passed a law that threatens fines and even jail time for social media companies and their executives who fail to quickly remove violent posts. That too is fraught, since it incentivizes companies to err on the side of overly broad removals that could imperil legitimate speech.
- Britain's proposal will soon enter a 3-month comment period, during which revisions are likely.
Between the lines: If platforms work to root out accounts that engage in deceptive behavior, they can limit the spread of weaponized misinformation without policing content.
- Disinformation operations deploy coordinated networks of fake personas and automated accounts that manipulate algorithms and flood the information space — tactics that have little to do with content.
- Focusing on tools and tactics enables social media companies to identify patterns of behavior that can help them prevent future disinformation operations, as the sketch below illustrates.
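To make the distinction concrete, here is a minimal, hypothetical sketch of behavior-based detection: it flags clusters of accounts that post identical text in near-synchrony, a hallmark of coordinated networks, without ever judging whether the content is true. This is not any platform's actual system; the data structure, thresholds, and sample posts are all invented for illustration.

```python
# Hypothetical sketch of behavior-based detection: flag accounts that
# publish identical text within a short time window. The content is used
# only as a fingerprint of duplication, never evaluated for truthfulness.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Post:
    account: str
    text: str
    timestamp: float  # seconds since epoch

def flag_coordinated_accounts(posts, window_secs=300, min_accounts=3):
    """Return groups of accounts that posted the same text in one window."""
    buckets = defaultdict(set)
    for p in posts:
        # Key on normalized text plus a coarse time bucket; thresholds
        # here (5-minute window, 3 accounts) are arbitrary assumptions.
        key = (p.text.strip().lower(), int(p.timestamp // window_secs))
        buckets[key].add(p.account)
    return [accts for accts in buckets.values() if len(accts) >= min_accounts]

posts = [
    Post("acct_a", "Vote tomorrow is cancelled!", 1000.0),
    Post("acct_b", "Vote tomorrow is cancelled!", 1010.0),
    Post("acct_c", "vote tomorrow is cancelled!", 1020.0),
    Post("acct_d", "Lovely weather today.", 1030.0),
]
# Expect one flagged cluster: acct_a, acct_b, acct_c
print(flag_coordinated_accounts(posts))
```

Real systems layer many more behavioral signals (account age, posting cadence, shared infrastructure) and handle edge cases such as posts straddling a window boundary, but the principle is the same: the trigger is the coordination, not the message.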
The bottom line: Policing online content could play into the hands of the very authoritarian regimes that deploy disinformation campaigns, since they themselves want to restrict expression.
Jessica Brandt is a fellow at the German Marshall Fund and the head of policy and research for its Alliance for Securing Democracy.