YouTube's product chief tells Axios that the Google-owned video site has removed thousands of COVID-19 videos — including some from the Brazilian president's channel — for violating policies related to the spread of medical misinformation.
Why it matters: Though criticized in the past for allowing misinformation to flourish, Facebook, Google and Twitter have all been taking a tougher stand when it comes to the coronavirus.
It's notable that Twitter, like other social networks, has announced stricter rules for virus-related misinformation than for other types of false posts. Even more notable, though, is that Twitter has actually enforced those rules against prominent accounts in recent days.
Why it matters: Twitter has been criticized for being lax in enforcing its rules, particularly against well-known politicians and celebrities.
A new Russian disinformation campaign targeting Americans on social media operated through satellite outfits in Ghana and Nigeria, according to new reports from CNN and Graphika, in collaboration with Facebook and professors at Clemson University.
Why it matters: Russian efforts to meddle in this year's U.S. elections are evolving in an attempt to avoid detection. In 2016, most state-backed misinformation campaigns went through St. Petersburg. Now, the Kremlin is changing course.
InfoWars host and conspiracy theorist Alex Jones was arrested early Tuesday and charged with driving while intoxicated in Travis County, Texas, the Austin American-Statesman reports.
Details: The 46-year-old radio host, who has been banned from most major Big Tech platforms, was released on bail almost four hours after his arrest. In December, a judge ordered him to pay $100,000 in court costs and legal fees in a case brought by a Sandy Hook family over his unsubstantiated conspiracy theories about the mass shooting.
Tech companies like Twitter and Facebook have struggled to find ways to label misinformation without appearing biased or baiting users into gaming the system.
Why it matters: It may seem obvious that tech companies should let users know when something is false, but sometimes, calling out false content can have unintended consequences.
After initially indicating it would not take action against campaign ads from President Trump that encouraged people to "take the Official 2020 Congressional District Census today," Facebook said Thursday it would take the messages down.
Why it matters: Facebook has generally subjected political advertising to few rules, but had said it would take a tough stand against any posts designed to mislead people about the census.
The chair of the House Judiciary antitrust subcommittee is preparing a bill that would remove liability protections from tech platforms that don't take down false political ads, Bloomberg Law reported Monday.
The big picture: Facebook's policy of not fact-checking political ads has angered Democrats, and tinkering with Section 230 of the Communications Decency Act, which immunizes internet platforms from lawsuits over user-posted material, has become an increasingly popular threat for lawmakers looking to bring Big Tech to heel.
Advances in digital technology are likely to erode trust and harm democracy around the world between now and 2030, according to a plurality of tech experts surveyed for a new Pew Research report.
Why it matters: Online misinformation is already causing a mix of actual harm and widespread fears, and advances like deepfakes are likely to intensify the challenges citizens face.
The Trump campaign, borrowing tactics from dictators and demagogues abroad, is poised to spend $1 billion on "what could be the most extensive disinformation campaign in U.S. history" to sway the 2020 election, McKay Coppins writes in the Atlantic.
Why it matters: Coppins offers the prospect of an election "shaped by coordinated bot attacks, Potemkin local-news sites, micro-targeted fearmongering, and anonymous mass texting."
Twitter on Tuesday announced a new policy aimed at discouraging the spread of deepfakes and other manipulated media, but the service will only ban content that threatens people's safety, rights or privacy.
Why it matters: Tech platforms are under pressure to stanch the flow of political misinformation, including faked videos and imagery. Twitter's approach, which covers a wide range of material but sets narrow criteria for deletion, is unlikely to satisfy critics or politicians like Joe Biden and Nancy Pelosi — who have both slammed platforms for allowing manipulated videos of them to spread.