Expert says detecting deepfakes is almost impossible
The technology to produce fake video and audio has become sophisticated enough that doctored or wholly fabricated images and sound are impossible for the public to detect, Hany Farid, a professor in the University of California, Berkeley's Department of Electrical Engineering and Computer Sciences and School of Information, said Wednesday at an Axios virtual event.
The big picture: Deepfakes — computer-synthesized images, audio or video — have experts worried about whether Silicon Valley can track and stop these AI-generated clips once they become widespread.
What he's saying: "[I]f we do not start thinking about this on many levels, I fear that these are existential threats to democracies and societies," he said.
- "We’ve seen misinformation lead to horrific violence in Myanmar, Ethiopia, the Philippines, Sri Lanka, India, Brazil. We have seen misinformation disrupt global democratic elections around the world. These aren’t hypothetical threats of what will happen if we do not get a handle on mis- and disinformation online," Farid told Axios' Ina Fried.
The state of play: Farid also called for social media platforms to have a better handle on misinformation and to stop hiding behind "the line of 'I don’t want to be the arbiter of truth.' It is nonsense."
- Social media platforms have generally tried to tackle misinformation with content moderators. Another solution could be changing their recommendation algorithms, Farid said.
- "Algorithms are amplifying the most divisive, the most hateful, the most conspiratorial, the most outrageous, because that engages people and that maximizes profit," he added.