
Illustration: Eniola Odetunde/Axios

When it comes to combating misinformation, research shows that it's more effective for authoritative figures to present accurate facts early and routinely alongside misinformation than to try to negate every piece of misinformation after the fact by labeling or calling it out as false.

Why it matters: The research provides a roadmap for more effective and efficient management of the coronavirus "infodemic" by health experts, government officials, internet platforms and news companies.

1. Proactive messaging: According to research from Kathleen Hall Jamieson, director of the Annenberg Public Policy Center at the University of Pennsylvania and co-founder of FactCheck.org, gaps in the public's background knowledge about common-sense flu cures, like whether vitamin C prevents viruses, show an "ongoing need for effective communication of needed information long before a crisis."

  • In an interview with Axios, Hall Jamieson argues that health experts have done a good job in the past of proactively messaging to the public about the benefits of hand-washing in preventing flu-like viruses, such as the common cold.
  • As a result, the public was not as susceptible to misinformation around hand-washing as it was to claims that vitamin C cures coronavirus. (Vitamin C remains an unproven cure for the common cold, despite decades-old myths to the contrary.)

2. Pre-bunking: Australian psychologist and professor Stephan Lewandowsky, who chairs the Cognitive Psychology department at the University of Bristol, argues that if people are made aware of the flawed reasoning found in conspiracy theories, they may become less vulnerable to such theories.

  • To an extent, this is similar to the approach some social media companies have taken: posting warning labels about misinformation alongside content in the news feed, which users encounter before deciding whether to click into an article.
  • Lewandowsky notes in his conspiracy theory handbook, published in March, that for certain content, like anti-vaccination conspiracy theories, pre-bunkings "have been found to be more effective than debunking" after the fact.

3. Label misinformation at the source level: To avoid chasing thousands, if not millions, of pieces of misinformation during an "infodemic," Steven Brill and Gordon Crovitz, co-CEOs of NewsGuard, argue it's better to rate the sources of misinformation that are repeat offenders, like certain websites or authors, rather than the individual pieces of content themselves (a schematic sketch of this approach follows the list below).

  • "Any of the websites now promoting COVID-19 hoaxes, like 5G causes, were publishing hoaxes a few months ago about 5G causing cancer," says Crovitz. "It underscores the importance of labeling misinformation at the domain layer. It makes it much harder for those hoax websites to succeed in promoting new hoaxes."
  • Brill notes that in using humans to manually rate sources, they are able to avoid the lack of transparency criticisms that platforms often receive for using artificial intelligence and opaque algorithms to identify misinformation.
  • "That's how you achieve scale is rating the reliability of sites, not individualizing articles," says Crovitz.

4. Go where fake news spreads: According to Hall Jamieson, it's especially important for health officials to provide accurate context in the venues where people typically encounter misinformation.

  • It's for that reason, she says, that Anthony Fauci is smart to appear on Sean Hannity's Fox News opinion show, as well as Chris Matthews' Sunday news program.
  • "If you don't go into the same venue where the misinformation originally spread, then you're not likely to reach the audience that heard it originally," she says.

5. The 10% rule: Some experts, including Hall Jamieson, say it's better to wait until a piece of misinformation reaches about 10% penetration among the population before debunking it; otherwise, you risk unintentionally spreading the rumor further before it ever becomes truly problematic.

  • Brill and Crovitz push back on this rule, arguing that if it's possible to provide context around misinformation before it reaches that penetration level, you should.

6. Prioritizing misinformation: Hall Jamieson says that in addition to understanding what meets the threshold to warrant debunking, health officials, policymakers, news organizations and others need to evaluate how harmful particular forms of misinformation are when determining how much to invest in providing context.

  • The recent misinformation around disinfectants being used to stop COVID-19 is a good example of the type that warrants immediate context and resources from health officials to debunk, versus, say, misinformation about where the virus came from. (A rough sketch of this triage logic follows.)
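As a rough, hedged sketch of how points 5 and 6 might combine in practice: the 10% figure comes from the experts above, but the harm categories, weights, and decision logic below are invented for illustration.

```python
# Hypothetical triage sketch combining the 10% rule (point 5) with
# harm-based prioritization (point 6). Only the 10% threshold comes
# from the article; every other number here is invented.

PENETRATION_THRESHOLD = 0.10  # the "10% rule" discussed above

# Invented harm weights: direct health danger outranks origin rumors.
HARM_WEIGHTS = {
    "dangerous_health_advice": 3.0,  # e.g., ingesting disinfectant
    "origin_rumor": 1.0,             # e.g., where the virus came from
}

def should_debunk(penetration: float, harm_category: str,
                  early_context_possible: bool = False) -> bool:
    """Decide whether a claim warrants an immediate, visible debunk."""
    harm = HARM_WEIGHTS.get(harm_category, 1.0)
    # High-harm claims justify acting below the 10% threshold,
    # echoing Brill and Crovitz's pushback on waiting.
    if early_context_possible and harm >= 3.0:
        return True
    return penetration >= PENETRATION_THRESHOLD

# Dangerous advice is debunked early; a low-reach origin rumor can wait.
print(should_debunk(0.02, "dangerous_health_advice", early_context_possible=True))  # True
print(should_debunk(0.02, "origin_rumor"))  # False
```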

Yes, but: Many of these efforts don't acknowledge that people have become increasingly biased toward information that backs their political viewpoint, regardless of its validity.

  • "The big question regarding misinformation as it pertains to coronavirus would be the degree to which it has been politicized," says Joshua Tucker, a professor of politics and co-director of the Center for Social Media and Politics at New York University.
  • "We have found in our research that people are much less likely to correctly identify false or misleading news as such if it aligns with your own political preferences."

Be smart: To an extent, tech platforms have taken this prioritization route as well, removing misinformation that they think could cause real-world health harm.

  • "Facebook are being more aggressive about actually removing certain types of COVID-related misinformation/disinformation, and not just providing correctives, which I think is a welcome development," says Philip Napoli, a professor at Duke University's Sanford School of Public Policy.

The big picture: When society began to seriously reckon with "fake news" and misinformation after the 2016 election, there were many efforts to impose binary solutions, labeling information as true or false and blocking or removing it accordingly. Experts say this approach is problematic for two reasons:

  1. The backfire effect: Some experts have found that when presented with a binary label, consumers may be incentivized to click something labeled "false" simply out of curiosity. Lewandowsky says he was never able to prove that effect conclusively, but has concluded that "if people are presented with explanations affirming facts or refuting myths, belief in facts may be sustained over time."
  2. The assumption that everything's been evaluated: "When someone sees something labeled as false, they assume everything else is true. The problem is that a lot of stuff that's not true exists and just hasn't been flagged yet," says Hall Jamieson.

Between the lines: Tech companies have struggled to figure out the best way to flag misinformation without incentivizing people to click further into it.

  • In 2017, Facebook said it would stop using "Disputed Flags" (red flags next to fake news articles) to identify fake news for users, because the flags caused more people to click on the debunked posts. Instead, the company now uses "warning labels," which appear to be working much better: according to the company, people exposed to those labels went on to view the original content only 5% of the time.
  • In January, Twitter began guiding users to authoritative sources with a search prompt, making it easier to encounter facts while browsing tweets in the timeline. The company has also expanded its verification policies to make it easier to identify when information is coming from credible sources.
  • YouTube has developed fact-check information panels that offer users context about misinformation in videos that don't rise to the level of removal under its policies.

The bottom line: There's no silver bullet for solving the misinformation crisis surrounding the coronavirus pandemic, but more conclusive research on the topic, particularly as it pertains to the internet age, can serve as a helpful roadmap going forward.

Go deeper

Aug 6, 2020 - Technology

Ex-U.S. chief data scientist: Social media misinformation is "life or death"

Former U.S. Chief Data Scientist DJ Patil warned at an Axios virtual event Thursday that the "tremendous amount" of misinformation on social media platforms "creates public distrust at a time when we need it the most," stressing: "It's no small statement to say this is life or death."

What he's saying: "One of the areas that will likely, even if we get a vaccine, cause an issue is will people trust a vaccine? And if we don't address those misinformation issues right now, we are going to have a far extended impact of COVID," Patil, who is now head of technology at Devoted Health, told Axios' Kim Hart.

Twitter to label state-affiliated media accounts

Photo Illustration: Omar Marques/SOPA Images/LightRocket via Getty Images

Twitter will begin labeling accounts belonging to state-affiliated media outlets from the permanent member countries of the U.N. Security Council, it announced Thursday.

The big picture: The new policy will affect “outlets where the state exercises control over editorial content” in China, France, Russia, the U.K., and the U.S., according to the announcement.

Updated Aug 6, 2020 - Axios Events

Watch: Ethical tech in crisis

On Thursday, August 6, Axios Cities author Kim Hart hosted a conversation on how technology companies are responding to the pandemic, featuring former U.S. Chief Data Scientist DJ Patil and Human Rights Watch Executive Director Kenneth Roth.

DJ Patil unpacked how tech companies are building ethical and responsible tech centered on privacy and transparency during a time of crisis.

  • On the issue of misinformation during a pandemic: "It's no small statement to say [misinformation] is life or death. And so platforms have responsibility right now to figure out what is the right level of action at a bare minimum. It's creating stricter standards for how and what is allowed on their platforms."
  • On his concerns with the lasting consequences of quickly developing COVID-19 response technology: "It's easy to say this technology can be beneficial. But I have very serious reservations about it being deployed. What happens once it's deployed? Do we keep that in place after a pandemic? Those are the questions that we should be prepared to answer right now."

Kenneth Roth discussed different contact tracing models, highlighting the Bluetooth-based contact tracing system designed by Apple and Google.

  • On apps that use Bluetooth technology rather than location data for contact tracing: "Not relying on location data is a huge step forward in terms of privacy...[The app] did not identify the infector, [it] simply told somebody that you were near somebody who was infected. They didn't put the data in a central database that the government might use for other reasons." (A simplified sketch of this local-matching design follows this list.)
  • On the responsibility of Big Tech when it comes to moderating which contact tracing apps they allow in their stores: "When you have problematic uses of technology of this sort, Google and Apple shouldn't participate. They should say we're not going to let you put apps like this on our stores if you're going to be using it in this highly abusive way."
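Here is a deliberately simplified sketch of the local-matching design Roth describes, where exposure checks happen on the phone and no central database links people together. The identifier scheme is invented for illustration; the real Apple/Google exposure notification protocol uses rotating cryptographic keys and is considerably more involved.

```python
# Simplified sketch of decentralized, Bluetooth-style exposure
# notification: phones exchange random beacons, store them locally,
# and match on-device against beacons that infected users publish.
import secrets

class Phone:
    def __init__(self):
        self.my_beacons = []        # random IDs this phone has broadcast
        self.heard_beacons = set()  # IDs overheard nearby over Bluetooth

    def broadcast(self) -> str:
        beacon = secrets.token_hex(16)  # rotating random identifier
        self.my_beacons.append(beacon)
        return beacon

    def hear(self, beacon: str) -> None:
        self.heard_beacons.add(beacon)  # stored locally, never uploaded

    def check_exposure(self, published_infected_beacons: set) -> bool:
        # Matching happens on the device; the server never learns who
        # matched, who was nearby, or where anyone was.
        return bool(self.heard_beacons & published_infected_beacons)

# Two phones near each other exchange beacons over Bluetooth.
alice, bob = Phone(), Phone()
bob.hear(alice.broadcast())

# Alice tests positive and volunteers to publish her beacons.
published = set(alice.my_beacons)
print(bob.check_exposure(published))  # True: Bob learns he was exposed
```

The design choice Roth highlights is exactly this inversion: the sensitive question ("was I near an infected person?") is answered on the device itself, so there is no centralized location database for a government to repurpose.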

Axios co-founder and CEO Jim VandeHei hosted a View from the Top segment with Salesforce Chief Ethical and Humane Use Officer Paula Goldman, who discussed Salesforce's work on ethical tech development.

  • On having clear priorities in developing ethical technology: "Even though there's no definition of responsible tech for a pandemic, we need to think about things like privacy. We need to think about how vulnerable groups [are] being affected."

Thank you Salesforce for sponsoring this event.