After initially indicating it would not take action against campaign ads from President Trump that encouraged people to "take the Official 2020 Congressional District Census today," Facebook said Thursday it would take the messages down.
Why it matters: Facebook has generally subjected political advertising to few rules, but had said it would take a tough stand against any posts designed to mislead people about the census.
The chair of the House Judiciary antitrust subcommittee is preparing a bill that would remove liability protections from tech platforms that don't take down false political ads, Bloomberg Law reported Monday.
The big picture: Facebook's policy of not fact-checking political ads has angered Democrats. Meanwhile, tinkering with Section 230 of the Communications Decency Act, which immunizes internet platforms from lawsuits over user-posted material, has become an increasingly popular threat for lawmakers looking to bring Big Tech to heel.
Advances in digital technology are likely to erode trust and harm democracy around the world between now and 2030, according to a plurality of tech experts surveyed for a new Pew Research report.
Why it matters: Online misinformation is already causing a mix of actual harm and widespread fears, and advances like deepfakes are likely to intensify the challenges citizens face.
The Trump campaign, borrowing tactics from dictators and demagogues abroad, is poised to spend $1 billion on "what could be the most extensive disinformation campaign in U.S. history" to sway the 2020 election, McKay Coppins writes in the Atlantic.
Why it matters: Coppins offers the prospect of an election "shaped by coordinated bot attacks, Potemkin local-news sites, micro-targeted fearmongering, and anonymous mass texting."
Twitter on Tuesday announced a new policy aimed at discouraging the spread of deepfakes and other manipulated media, but the service will only ban content that threatens people's safety, rights or privacy.
Why it matters: Tech platforms are under pressure to stanch the flow of political misinformation, including faked videos and imagery. Twitter's approach, which covers a wide range of material but sets narrow criteria for deletion, is unlikely to satisfy critics or politicians like Joe Biden and Nancy Pelosi — who have both slammed platforms for allowing manipulated videos of them to spread.
YouTube will bar videos that lie about the mechanics of an election, the company announced in a blog post Monday, but indicated it remains reluctant to crack down more broadly on deceptive political speech, as some critics have demanded.
Why it matters: YouTube's content policies — which are separate from the advertising policies Google outlined in the fall — do not ban political falsehoods at a time when tech platforms are under fire to limit misinformation about candidates and elections.
Democratic megadonor George Soros ripped into Mark Zuckerberg and Facebook's decision not to fact-check 2020 political ads in a Friday morning New York Times op-ed.
"I believe that Mr. Trump and Facebook's chief executive, Mark Zuckerberg, realize their interests are aligned — the president's in winning elections, Mr. Zuckerberg's in making money ... Facebook's decision not to require fact-checking for political candidates' advertising in 2020 has flung open the door for false, manipulated, extreme and incendiary statements."— George Soros
Facebook said Thursday it will take further steps to ensure its social network is home to accurate information about the fast-spreading novel coronavirus.
With just weeks until the Iowa caucuses, social media platforms have finalized their rules governing political speech — and fired a starting pistol for political strategists to find ways to exploit them between now and Election Day.
Why it matters: "One opportunity that has arisen from all these changes is how people are trying to get around them," says Keegan Goudiss, director of digital advertising for Bernie Sanders' 2016 campaign and now a partner at the progressive digital firm Revolution Messaging.
Facebook, TikTok and Reddit all updated their policies on misinformation this week, suggesting that tech platforms are feeling increased pressure to stop manipulation attempts ahead of the 2020 elections.
Why it matters: This marks the first time several social media giants have taken a hard line specifically on banning deepfake content — typically video or audio manipulated using artificial intelligence (AI) or machine learning to intentionally deceive viewers.