Facebook’s prioritization of openness and accessibility has made it a target of critics who say it isn't doing enough to stamp out fake news and misinformation.
Why it matters: Experts argue Facebook could do a number of things to reduce fake news. But almost every option could hit its bottom line or would challenge CEO Mark Zuckerberg’s long-term vision for the company.
Sources say that Zuckerberg prefers to manage the fake news problem through rational, careful experiments and modest changes. He often sees this as a PR problem, not a foundational one.
- But other Facebook executives are pushing for more radical changes, realizing the free-speech zeal that brought them fame and riches is causing constant and growing problems.
The most drastic changes Facebook could make to its policies include:
- Removing news from the platform altogether.
- Pre-approving content before letting users post it.
- Removing all monetization efforts around news.
Some experts suggest more realistic options:
1. Ramp up its enforcement regime: Facebook has committed to hiring 20,000 moderators by the end of the year to help monitor content. Many critics argue that even 20,000 people could not realistically monitor the billions of photos, posts, and videos uploaded to the platform daily.
- Facebook says that these moderators, combined with artificial intelligence and fact-checking partnerships, have made the platform much safer. It currently has experts in 50 countries, speaking dozens of different dialects, to help manage free speech issues that are nuanced based on different cultural expectations.
2. Draw tighter lines around what's considered harmful content: Facebook generally uses two broad questions when evaluating which types of news stay and which go: Is the user or Page authentic, and is the post causing real-world harm, like inciting violence?
- Facebook says it will take down posts that lead to violence, but some experts argue it could be more discerning in how it identifies potentially harmful content. Others say that, in doing so, the company could jeopardize its protections under Section 230 of the Communications Decency Act, which shields websites from liability for content posted by users.
- Brent Merritt, a digital strategy consultant at Metric Communications, says Facebook refuses to draw the line at "malicious falsehoods" because it's "afraid of getting it wrong and drawing the ire of Trump trolls."
- But, he points out that the company is already hiring tens of thousands of people to identify false information. "So make it policy!"
3. Lower the threshold of removal. Facebook won’t publicly say how many pieces of nefarious content a Page or person needs to post before being removed, out of fear that it could lead bad actors to abuse the system. Users have pointed to examples in which bad actors have repeatedly shared false information without being removed. Most recently, Facebook said it wouldn't remove InfoWars from the platform, even though it continues to promote conspiracy theories.
4. Use stricter authentication of users: Currently, Facebook requires that users be over the age of 13, and it requires them to use their “everyday names” and valid email accounts to set up a profile. It also requires advertisers to provide further credentials, like mailing addresses. While Facebook takes down hundreds of millions of accounts per quarter (586 million in the first three months of 2018), some security experts argue stricter measures could be put into place at the time of account creation.
5. Remove more financial incentives for spreading fake news: Currently, when something on Facebook is fact-checked as false, it gets down-ranked on the platform, driving less traffic to ad-funded Pages and reducing the financial incentive to spread it. If a Page offends enough times, Facebook revokes its advertising rights or, in some cases, temporarily bans the account.
- But in some cases, users can regain their accounts or ad rights after a certain period of time if they demonstrate positive behavior. Some argue Facebook should make punishments permanent.
6. More clearly show when content comes from an established brand: Facebook has made minor adjustments, like adding news brands' logos to stories.
- "The brand is a proxy for trust and Facebook (and Google) have long minimized the brand in their experiences," says Jason Kint, president of Digital Content Next, the digital publishers' association. "This is important for brands who have built up trust through their reputation but it’s also important to newer publishers who want to build their brands."
7. Separate news from social media: Some experts argue that Facebook could take a page from Snapchat's playbook and limit access to news and information to vetted destinations on the platform, like Facebook Instant Articles and Watch. Facebook has somewhat moved in this direction by down-ranking news in its News Feed and placing more emphasis on paying news organizations to build shows on Watch.
The big picture: After thousands of headlines over the past year about Facebook and fake news, critics and the media are beginning to take a slightly more empathetic view towards Facebook's tricky position.
Case in point: A number of media and tech journalists, like TechCrunch's Josh Constine, The Washington Post's Margaret Sullivan, and The Information's Jessica Lessin, have recently come out in support of Facebook avoiding broad censorship of viewpoints on the platform.
The bottom line: Facebook may not be able to do much more than it has already tried, unless it makes a drastic change that would impact its business and long-term vision.