A shaky first pass at criminalizing deepfakes
Since Sen. Ben Sasse (R-Neb.) introduced the first short-lived bill to outlaw malicious deepfakes, a handful of members of Congress and several statehouses have taken stabs at the growing threat.
But, but, but: So far, legal and deepfake experts haven't found much to like in these initial attempts, which they say are too broad, too vague or too weak — meaning that, despite all the hoopla over the technology, we're not much closer to protecting against it.
The big picture: Deepfakes pose a threat to elections and businesses, and experts worry that a convincing, well-timed fake video could set off riots or even armed conflict. But as we've been chronicling, campaigns and firms are largely unready for the threat.
What's happening: Congress and state legislatures are instead trying to head off deepfakes with laws that would punish creating or distributing certain types.
- In Virginia, sharing nude photos or videos without the subject's permission is illegal — and as of this month, a new law now applies to digitally altered images and videos, too.
- In New York, a similar bill has failed to get off the ground for over a year, facing pushback from Hollywood studios that say it's overly broad.
- In Texas and California, recent bills would criminalize the creation of AI-generated videos aimed at knocking an election off course — but only within 30 or 60 days of the poll date, depending on the state.
In Congress, the DEEPFAKES Accountability Act would require people to put watermarks on AI-generated or altered content, and allow people who were spoofed to sue the creator of a deepfake. A second bill would require the Department of Homeland Security to produce an annual report about deepfakes and recommend new regulations.
None of these quite hits the mark. "The bills I've seen so far have been crafted really quickly," says Hany Farid, a digital forensics expert at Berkeley. "I don't think they've been thoughtful about the technology and the consequences."
Between the lines: It's really hard to head off the deepfake threat with laws, because by the time a malicious fake goes viral — whether it's aimed at a candidate or it's nonconsensual porn — it's too late to contain the damage.
- Bills requiring disclosure won't contain the harm of a fast-spreading fake, says Danielle Citron, a law professor at Boston University. "We believe deepfake sex videos though they are fake," she says. "People won’t see the watermark."
- And proposals focused on deepfakes' political impact won't protect the larger group most at risk of harm — women placed in deepfake pornography without their consent, says Mary Anne Franks, a University of Miami law professor.
What's next: Citron and Franks are working with a House member — they won't say who — to draft a bill they say is better suited to the problem. Rather than targeting specific uses of deepfakes, it takes on digital impersonation broadly, Citron says.
Go deeper: Congress' flawed proposals to combat deepfakes (Slate)