Axios technology correspondent Ina Fried (left) and Hany Farid, professor at the University of California, Berkeley. Photo: Axios
The technology for producing fake video and audio has become sophisticated enough that doctored or wholly fabricated images and sound are impossible for the public to detect, Hany Farid, a professor in the University of California, Berkeley's Department of Electrical Engineering and Computer Sciences and School of Information, said Wednesday at an Axios virtual event.
The big picture: Deepfakes, or computer-synthesized images, audio or video, have experts worried that Silicon Valley will be unable to track and stop these AI-generated clips once they become widespread.
What he's saying: "[I]f we do not start thinking about this on many levels, I fear that these are existential threats to democracies and societies," he said.
- "We’ve seen misinformation lead to horrific violence in Myanmar, Ethiopia, the Philippines, Sri Lanka, India, Brazil. We have seen misinformation disrupt global democratic elections around the world. These aren’t hypothetical threats of what will happen if we do not get a handle on mis- and disinformation online," Farid told Axios' Ina Fried.
The state of play: Farid also called for social media platforms to have a better handle on misinformation and to stop hiding behind "the line of 'I don’t want to be the arbiter of truth.' It is nonsense."
- Social media platforms have generally tried to tackle misinformation with content moderators. Another solution could be changing their recommendation algorithms, Farid said.
- "Algorithms are amplifying the most divisive, the most hateful, the most conspiratorial, the most outrageous, because that engages people and that maximizes profit," he added.
Watch the event, Axios' Trust and Transparency, online at Axios.com.