A surge of nefarious activity online has created new businesses, research disciplines and newsroom beats focused on studying and combating internet propaganda.
Why it matters: Americans were mostly caught flat-footed by the sophistication of state-sponsored and fringe misinformation attacks leading up to the 2016 election. Now, a variety of groups — from academics to journalists — are mobilizing to try to stay ahead of it.
The big picture: Every expert Axios has spoken to about fighting misinformation agrees that no one institution has enough visibility to piece together a full picture of the underlying campaigns perpetrated by bad actors.
- The only way to stay on top of the threat is to devote more attention and resources to understanding online fake news across a variety of sectors.
Non-profits: One leader in the field is the Digital Forensic Research Lab within The Atlantic Council, which has been in close communication with platforms, including Facebook, to help study the campaigns. The Lab has roughly a dozen people in different time zones studying the behaviors and patterns of bad actors and fake news.
- Ben Nimmo, information defense fellow at the Lab, says tracking misinformation campaigns is like "building a jigsaw puzzle with all of the little bits and pieces you find, using as much evidence as you can to piece information together."
- Other non-profits, like The Knight Foundation, are running programs that grant funds to those with new ideas about how to spur news literacy and combat fake news.
- The Data & Society Research Institute, led by founder and president Danah Boyd, is also working to combat fake news through the study and analysis of data-driven automated technologies, including social media platforms.
For-profits: Businesses have also begun to fight misinformation and vet content online. Storyful, a social media intelligence company bought by News Corp in 2013, aims to find and address misinformation campaigns in real time.
Its clients, ranging from media companies to platforms, rely on it to sort through existing threats and quickly identify new ones.
News and advertising industry: The advertising industry has been particularly vigilant about weeding out fake news and misinformation because brands are more wary of placing ads next to untrustworthy content. Groups like Sleeping Giants, which calls out brands that advertise on propagandist websites, have caused thousands of advertisers to flee from websites like Breitbart and Infowars.
- Longtime journalists Steven Brill and Gordon Crovitz launched NewsGuard earlier this year, which will hire dozens of journalists as analysts to review news websites ahead of the midterm elections.
- Some advertising fraud companies, like White Ops, are applying fraud detection tools to identify the behaviors of bad actors who use fake news to make money.
Journalists: Prior to the 2016 election, most journalists covered media through the lens of well-known institutions, like cable news. After the election, dozens of newsrooms assigned journalists to beats covering misinformation and fake news. CNN, BuzzFeed News, The Daily Beast and others have been particularly aggressive.
Academics: More universities are creating programs to study misinformation and online propaganda.
- For example, Jonathan Albright, the director of the Digital Forensics Initiative at the Tow Center for Digital Journalism, has done extensive research on how the spread of misinformation through ad technology and social media affects elections.
- Other universities, like Stanford, Vanderbilt and Harvard, are also ramping up research efforts into fake news and misinformation.
Big tech: The tech companies, whose cloud and social media technologies are often being used to host and spread misinformation, are pouring resources into the fight. HP, Google, and others are spending millions on initiatives to fight fake news.
- Facebook, Google, Twitter and others are implementing programs to fight fake news on their own platforms, and are hiring tens of thousands of contractors to help moderate it. But many critics argue they still aren't doing enough, given the enormous revenue they make from automated content.
The bottom line: Even though the global threat of misinformation is growing as bad actors become more sophisticated, there's a much higher level of awareness of and attention to the issue than ever before.