Political strategists are looking for ways to navigate the new rules of Big Tech. Jan 14, 2020 - Economy & Business
Facebook, TikTok and Reddit all updated their policies on misinformation this week. Jan 10, 2020 - Technology
Internet companies are weighing limits on ad targeting as a way to curb misinformation. Nov 17, 2019 - Economy & Business
One set of rules for politicians or "world leaders," another for the rest of us. Oct 20, 2019 - Politics & Policy
It's switching from employees to volunteers. Oct 17, 2019 - Technology
Facebook has suspended the account of Ukrainian lawmaker Andrii Derkach, an associate of Rudy Giuliani accused by the U.S. of being "an active Russian agent for over a decade," for election interference activity.
Why it matters: The U.S. Treasury Department sanctioned Derkach in September for "alleged efforts to interfere in the U.S. presidential election," including by releasing edited audio tapes and other unsubstantiated claims to denigrate Joe Biden and other officials.
YouTube announced Thursday that it is expanding its hate and harassment policies to prohibit content that targets an individual or group with conspiracy theories, like QAnon, that have been used to justify real-world violence.
Why it matters: YouTube is the latest tech giant to crack down on QAnon content, which has seen record online interest in 2020.
Facebook and Twitter on Wednesday took steps to limit the circulation of a New York Post story about Hunter Biden, deploying throttles built to avoid repeating the mistakes of 2016.
Why it matters: In the run-up to November's election, online platforms have designed circuit breakers to limit the spread of hacked emails and foreign meddling. In 2016, such material helped shape the political fight, and social media took much of the blame.
Peloton, the networked fitness-bike seller, has found itself in the position of having to scour its forums and leaderboards to remove hateful speech.
The bottom line: It highlights how toxic the social media environment is in 2020. If it's online and social, it's probably going to require moderation.
Social media platforms are scrambling to crack down on domestic actors who have picked up foreign meddling techniques to try to influence the 2020 election — an effort that's resulted in a spate of action against U.S.-based conservatives.
The big picture: Domestic influence campaigns are not new, but tech firms are more aware of them this cycle. The companies also have more help from intelligence agencies and media outlets in uncovering these operations and shutting them down.
Misinformation related to President Trump's COVID-19 diagnosis has swarmed social media and the broader web since Friday, with claims that Trump is faking his illness gaining particular traction, according to data provided to Axios by social intelligence firm Zignal Labs.
Why it matters: Moments of national urgency are now becoming flashpoints in digital information wars, with misinformation being spread far and wide by malicious actors, conspiracy theorists and earnest dupes.
If social media platforms don't start dealing much more aggressively with altered audio and video, they risk seeing their platforms devolve into a sea of faked content, experts tell Axios.
Why it matters: The platforms are already struggling to deal with manipulated media, and the technology to create "deepfakes," which are fabricated media generated by machine-learning-based software, is improving rapidly.
Facebook said Wednesday that it was removing a series of ads from President Trump's campaign that linked American acceptance of refugees with increased coronavirus risk, a connection Facebook says is without merit.
Why it matters: The ads were pulled after they received thousands of impressions and are a sign that the Trump campaign continues to test the limits of social media rules on false information.
Experts are seeing malicious groups, both foreign and domestic, shift to more advanced disinformation campaigns than those of 2016, Nina Jankowicz, disinformation fellow at the Wilson Center, said Wednesday at an Axios virtual event.
Why it matters: The method, called "disinformation laundering," involves seeding false ideas or conspiracy theories that can become legitimized through media coverage or amplification by public figures and politicians.
The technology to produce fake video and audio has become sophisticated enough to make doctored or wholly fabricated images and sound impossible for the public to detect, Hany Farid, a professor at the University of California, Berkeley's School of Information and Department of Electrical Engineering and Computer Sciences, said Wednesday at an Axios virtual event.
The big picture: Deepfakes, or computer-synthesized images, audio or video, have caused experts to worry about Silicon Valley's ability to meet the challenge of tracking and stopping these AI-generated clips once they become widespread.