How global efforts to limit disinformation could infringe speech
The big picture: The British proposal, which comes on the heels of new measures in Australia and Singapore, would create a regulator empowered to punish social media platforms that fail to quickly remove harmful material, including disinformation. But these approaches — which focus on content rather than problematic behavior — have concerning implications for free expression.
Where it stands:
- Singapore introduced draft legislation two weeks ago that would allow the government to force corrections into online content that it deems false. Given its history of silencing criticism, rights groups are justifiably concerned.
- Also two weeks ago, Australia passed a law that threatens fines, and even jail time for executives, when social media companies fail to quickly remove violent posts. That approach too is fraught, since it incentivizes companies to take down content broadly in ways that could imperil legitimate speech.
- Britain's proposal will soon enter a three-month comment period, during which revisions are likely.
Between the lines: If platforms work to root out accounts that engage in deceptive behavior, they can limit the spread of weaponized misinformation without policing content.
- Disinformation operations deploy coordinated networks of fake personas and automated accounts that manipulate algorithms and flood the information space — tactics that have little to do with content.
- Focusing on tools and tactics enables social media companies to identify patterns of behavior that can prevent disinformation operations in the future.
The bottom line: Policing online content could play into the hands of the very authoritarian regimes that deploy disinformation campaigns, since restricting expression is precisely what those regimes seek.
Jessica Brandt is a fellow at the German Marshall Fund and the head of policy and research for its Alliance for Securing Democracy.