Tech giants, startups and academic labs are pumping out datasets and detectors in hopes of jump-starting the effort to create an automated system that can separate real videos, images and voice recordings from AI forgeries.
At an MIT conference on Wednesday, a journalist pointedly asked Russian President Vladimir Putin whether he would interfere again in U.S. elections. Putin demurred.
What's happening: The world leader was actually a glitchy deepfake. His face was a real-time AI-generated mask that made a performer look like Putin on screen — but because the mask stopped at the forehead, this was Putin with a fresh head of hair.
More than 150 Facebook pages targeting American soldiers and veterans — with a total reach of more than 32 million people — dealt lies and propaganda for years, many while soliciting donations, according to a new investigation from a leading veterans' group.
What's happening: About a third of these pages and groups, mostly controlled from overseas, were taken down after they were reported to Facebook. Others remain up, gathering followers and sowing divisions — and illustrating the failure of social networks and law enforcement to curb online disinformation.
Made-up stories — spoken yarns, art, games, books and films — have always been a diversion reserved for the end of a long day. Now they're becoming alloyed with the rest of our lives, jostling for space with facts.
What's happening: We are surrounded by lifelike synthetic realities — super-engaging parallel worlds, enabled by new technologies, that are coming to define how we understand and interact with each other.
Hostile powers undermining elections. Deepfake video and audio. Bots and trolls, phishing and fake news — plus of course old-fashioned spin and lies.
Why it matters: The sheer volume of assaults on fact and truth is undermining trust not just in politics and government, but also in business, tech, science and health care.
Instagram could become a new platform for the sharing of disinformation around the 2020 election because of the way propagandists are relying on images and proxy accounts to create and circulate fake content, leading social intelligence experts tell Axios.
The big picture: "Disinformation is increasingly based on images as opposed to text," said Paul Barrett, the author of an NYU report that's prompted a renewed look at the problem. "Instagram is obviously well-suited for that kind of meme-based activity."
The international industry of disinformation-for-hire services has already reared its head in Western politics, and it's growing fast.
The big picture: There is no U.S. law that prevents candidates, parties or political groups from launching their own disinformation campaigns, either in-house or through a contractor, so long as foreign money isn't involved. It's up to individual candidates to decide their tolerance for the practice.
Despite the sharp alarms being sounded over deepfakes — uncannily realistic AI-generated videos showing real people doing and saying fictional things — security experts believe that the videos ultimately don't offer propagandists much advantage compared to the simpler forms of disinformation they are likely to use.
Why it matters: It's easy to see how a viral video that appears to show, say, the U.S. president declaring war would cause panic — until, of course, the video was debunked. But deepfakes are not an efficient basis for a long-term disinformation campaign.
Since Sen. Ben Sasse (R-Neb.) introduced the first short-lived bill to outlaw malicious deepfakes, a handful of members of Congress and several statehouses have taken stabs at the growing threat.
But, but, but: So far, legal and deepfake experts haven't found much to like in these initial attempts, which they say are too broad, too vague or too weak — meaning that, despite all the hoopla over the technology, we're not much closer to protecting against it.
In the first signs of a mounting threat, criminals are starting to use deepfakes — beginning with AI-generated audio — to impersonate CEOs and steal millions from companies, which are largely unprepared to combat them.
Why it matters: Nightmare scenarios abound. As deepfakes grow more sophisticated, a convincing forgery could send a company's stock plummeting (or soaring), extract money, or ruin its reputation in a viral instant.