Illustration: Sarah Grillo/Axios
Deepfakes — realistic AI-generated audio, video, and images — are denounced as a threat to democracy and society by experts and lawmakers. So why are academics producing research that advances them?
Why it matters: Increasingly accessible tools for creating convincing fake videos are a "deadly virus," said Hany Farid, a digital-forensics expert at Dartmouth. "Worldwide, a lot of governments are worried about this phenomenon. I don't think this has been overblown."
Academic researchers are rapidly creating new methods for faking videos, photos, and audio. But they say their goal is not to destroy democracy but to build new tools for creativity and to help improve other emerging technologies.
- They call the technology "synthetic content generation."
- In its benign form, researchers say, the techniques can be used in filmmaking, dubbing, or virtual reality, and also as training data to improve self-driving cars.
- But they acknowledge that there is serious potential for harm when the technology is misused. In a paper published this summer, a pair of law scholars wrote:
"The volume and sophistication of publicly available academic research and commercial services will ensure the steady diffusion of deepfake capacity no matter efforts to safeguard it."— University of Texas professor Bobby Chesney and University of Maryland professor Danielle Citron
Axios reached out to several academics who have published recent research that could be used to create deepfakes. Two responded.
- Caroline Chan, an MIT graduate student who as a UC Berkeley undergrad created a system to simulate body movements in videos, said her research group has also worked on methods of detecting digital forgeries.
- "As a community it is important to us to both advance the state of the art in content creation and be able to separate fake from real content with high confidence," she told Axios.
Aayush Bansal, a PhD candidate at Carnegie Mellon University, developed a technique to replace one person’s face with another's in a video.
- But he said the technique has positive as well as negative potential uses: one way to improve systems that detect faked videos, he said, is to pursue new ways of generating them.
- "Since these new approaches essentially work by learning a model of what real data looks like, they are also very good at detecting fake content that was manipulated in any way or created from thin air," said Chan.