Over the past week, Facebook and Twitter have codified a dual-class system for free speech: one set of rules for politicians or "world leaders," another for the rest of us.
Why it matters: Social media platforms are privately owned spaces that have absorbed a huge chunk of our public sphere, and the rules they're now hashing out will shape the information climate around elections for years to come.
The threat of deepfakes to elections, businesses and individuals is the result of a breakdown in the way information spreads online — a long-brewing mess that involves a decades-old law and tech companies that profit from viral lies and forgeries.
Why it matters: Better automated deepfake detection, or a high-tech way of proving where a photo or video was taken, likely won't solve the problem on its own. Instead, it might require far-reaching changes to the way social media sites police themselves.
Disinformation campaigns used to consist of trolls and bots orchestrated and manipulated to produce a desired result. Increasingly, though, these campaigns are able to find willing human participants to amplify their messages and even generate new ones on their own.
The big picture: It's as if they're switching from employees to volunteers — and from participants who are in on the game to those who actually believe the disinformational payload they're delivering.
As deepfakes become more convincing and the public grows more aware of them, realistic AI-generated videos, images and audio threaten to undermine crucial evidence at the center of the legal system.
Why it matters: Leaning on key videos in a court case — like a smartphone recording of a police shooting — could become more difficult if jurors are more suspicious of them by default, or if lawyers call them into question by raising the possibility that they are deepfakes.
A growing industry of commercial disinformation services based in countries like Russia and the Philippines has the language skills, local contacts and cultural background to influence an English-language conversation half a world away.
Driving the news: A new report from the security firm Recorded Future documents two campaigns that it paid Russian-speaking, dark-web propagandists-for-hire to run.
Tech giants, startups and academic labs are pumping out datasets and detectors in hopes of jump-starting the effort to create an automated system that can separate real videos, images and voice recordings from AI forgeries.
At an MIT conference on Wednesday, a journalist pointedly asked Russian President Vladimir Putin whether he would interfere again in U.S. elections. Putin demurred.
What's happening: The world leader was actually a glitchy deepfake. His face was a real-time AI-generated mask that made a performer look like Putin on screen — but because the mask stopped at the forehead, this was Putin with a fresh head of hair.
More than 150 Facebook pages targeting American soldiers and veterans — with a total reach of more than 32 million people — dealt lies and propaganda for years, many while soliciting donations, according to a new investigation from a leading veterans' group.
What's happening: About a third of these pages and groups, mostly controlled from overseas, were taken down after they were reported to Facebook. Others remain up, gathering followers and sowing divisions — and illustrating the failure of social networks and law enforcement to curb online disinformation.
Made-up stories — spoken yarns, art, games, books and films — have always been a diversion reserved for the end of a long day. Now they're becoming alloyed with the rest of our lives, jostling for space with facts.
What's happening: We are surrounded by lifelike synthetic realities — super-engaging parallel worlds, enabled by new technologies, that are coming to define how we understand and interact with each other.
Hostile powers undermining elections. Deepfake video and audio. Bots and trolls, phishing and fake news — plus of course old-fashioned spin and lies.
Why it matters: The sheer volume of assaults on fact and truth is undermining trust not just in politics and government, but in business, tech, science and health care as well.