Oct 12, 2019

The deepfake threat to evidence

Illustration: Rebecca Zisser/Axios

As deepfakes become more convincing and public awareness of them grows, these realistic AI-generated videos, images and audio threaten to undermine crucial evidence at the center of the legal system.

Why it matters: Leaning on key videos in a court case — like a smartphone recording of a police shooting, for example — could become more difficult if jurors are more suspicious of them by default, or if lawyers call them into question by raising the possibility that they are deepfakes.

What's happening: Elected officials, experts and the press have been warning about the potential fallout for business or elections from deepfakes. But apart from a few high-profile examples, the tech so far has been used almost exclusively for porn, according to a landmark new report from Deeptrace Labs.

  • Plus, when President Trump and his supporters throw around accusations of "fake news" to discredit information that they don't like, it can deepen the atmosphere of distrust.
  • All this could lead jurors or attorneys to falsely assume that a real video is faked, says Riana Pfefferkorn, associate director of surveillance and cybersecurity at Stanford's Center for Internet and Society.

"This is dangerous in the courtroom context because the ultimate goal of the courts is to seek out truth," says Pfefferkorn, who recently wrote an article about deepfakes in the courtroom for the Washington State Bar magazine.

  • "My fear is that the cultural worry could be weaponized to discredit [videos] and lead jurors to discount evidence that is authentic," she tells Axios.
  • If a video's authenticity comes into question, the burden shifts to the side that introduced it to prove it's not fake — which can be expensive and take a long time.

Already, people accused of possessing child porn often claim that it's computer-generated, says Hany Farid, a digital forensics expert at UC Berkeley. "I expect that in this and other realms, the rise of AI-synthesized content will increase the likelihood and efficacy of those claiming that real content is fake."

Go deeper

Philosophers tackle deepfakes

Photo illustration: Eniola Odetunde. Photo via Francois G. Durand/Getty Images

Technology could erode the evidentiary value of video and audio so that we see them more like drawings or paintings — subjective takes on reality rather than factual records.

What's happening: That's one warning from a small group of philosophers who are studying a new threat to the mechanisms we use to communicate and to try to convince one another.

Go deeper (Nov 9, 2019)

The roots of the deepfake threat

Illustration: Aïda Amer/Axios

The threat of deepfakes to elections, businesses and individuals is the result of a breakdown in the way information spreads online — a long-brewing mess that involves a decades-old law and tech companies that profit from viral lies and forgeries.

Why it matters: The problem likely will not end with better automated deepfake detection, or a high-tech method for proving where a photo or video was taken. Instead, it might require far-reaching changes to the way social media sites police themselves.

Go deeper (Oct 19, 2019)

Adobe, Twitter, NYT launch effort to fight deepfakes

Illustration: Sarah Grillo/Axios

Hoping to stem an anticipated rise in faked video, Adobe, Twitter and the New York Times are proposing a new industry effort designed to make clear who created a photo or video and what changes have been made to it.

Why it matters: With editing tools and artificial intelligence rapidly improving, it will soon be possible to make convincing videos showing anyone saying anything and photos of things that never happened.

Go deeper (Nov 4, 2019)