5. Social media moves to police truth
For years, Facebook and other social media companies have erred on the side of lenience in policing their sites, allowing most posts with false information to stay up as long as they came from a genuine human rather than a bot or nefarious actor, Axios emerging tech reporter Kaveh Waddell writes.
- What's new: The companies are considering a fundamental shift with profound social and political implications: deciding what's true and what's false.
- Why it matters: The new approach would rein in manipulated media — from sophisticated, AI-enabled video or audio deepfakes to super-basic video edits like that much-circulated, slowed-down clip of Nancy Pelosi.
Between the lines: This would be a significant concession to critics who say the companies have a responsibility to do much more to keep harmful false information from spreading unfiltered.
- It would also be an inflection point in the companies' approach to free speech, which has thus far been that more is better and that the truth will bubble up.
Pressure from D.C. is mounting. House Intelligence Chairman Adam Schiff asked Facebook, Twitter and Google in July how they were dealing with deepfakes.
- The companies pointed to existing policies against nonconsensual porn and election manipulation, but said they were considering new ones.
The big issues that hang over the companies:
- How to decide when manipulated media is appropriate.
- Whether to take an offending post down, hide it or label it.
- How to label it.