YouTube will bar videos that lie about the mechanics of an election, the company announced in a blog post Monday, but indicated it remains reluctant to crack down more broadly on deceptive political speech, as some critics have demanded.
Why it matters: YouTube's content policies — which are separate from the advertising policies Google outlined in the fall — do not ban political falsehoods at a time when tech platforms are under fire to limit misinformation about candidates and elections.
Democratic megadonor George Soros ripped into Mark Zuckerberg and Facebook's decision not to fact-check 2020 political ads in a Friday morning New York Times op-ed.
"I believe that Mr. Trump and Facebook's chief executive, Mark Zuckerberg, realize their interests are aligned — the president's in winning elections, Mr. Zuckerberg's in making money ... Facebook's decision not to require fact-checking for political candidates' advertising in 2020 has flung open the door for false, manipulated, extreme and incendiary statements." — George Soros
Facebook said Thursday it will take further steps to ensure its social network is home to accurate information about the fast-spreading novel coronavirus.
With just weeks until the Iowa caucuses, social media platforms have finalized their rules governing political speech — and fired a starting pistol for political strategists looking to exploit them between now and Election Day.
Why it matters: "One opportunity that has arisen from all these changes is how people are trying to get around them," says Keegan Goudiss, director of digital advertising for Bernie Sanders' 2016 campaign and now a partner at the progressive digital firm Revolution Messaging.
Facebook, TikTok and Reddit all updated their policies on misinformation this week, suggesting that tech platforms are feeling increased pressure to stop manipulation attempts ahead of the 2020 elections.
Why it matters: This is the first time that several social media giants are taking a hard line specifically on banning deepfake content — typically video or audio that's manipulated using artificial intelligence (AI) or machine learning to intentionally deceive users.
A video selectively edited to frame one of Joe Biden's stump speeches as racist was shared by GOP strategists and a former speaker of the Missouri House, the New York Times reports, citing data from misinformation tracker VineSight.
Why it matters: Sharing misleading information via social media to incite anger toward presidential candidates is easy — and it works.
In a clip from a stunning new AI-manipulated video, President Nixon delivers a somber speech he never gave in real life, appearing to eulogize American astronauts left on the moon to die.
Why it matters: The video simultaneously shows the dangerous power of deepfake technology that can put words into the mouths of powerful leaders — and its potential to expand the boundaries of art.
Ad targeting is how Facebook, Google and other online giants won the internet. It's also key to understanding why these companies are being held responsible for warping elections and undermining democracy.
The big picture: Critics and tech companies are increasingly considering whether limiting targeting of political ads might be one way out of the misinformation maze.
Technology could erode the evidentiary value of video and audio so that we see them more like drawings or paintings — subjective takes on reality rather than factual records.
What's happening: That's one warning from a small group of philosophers who are studying a new threat to the mechanisms we use to communicate and to try to convince one another.
Hoping to stem an anticipated rising tide of faked video, Adobe, Twitter and the New York Times are proposing a new industry effort designed to make clear who created a photo or video and what changes have been made to it.
Why it matters: With editing tools and artificial intelligence rapidly improving, it will soon be possible to make convincing videos showing anyone saying anything and photos of things that never happened.