Photo: Noah Berger / AP
Facebook is rolling out its fact-checking program for "related articles" more broadly, using machine learning, a Facebook spokesperson tells Axios.
- The spokesperson says that after several months of tests, Facebook's machine learning tools have gotten better at detecting potential hoaxes to send to fact checkers.
- The broader rollout will use the updated machine learning to flag more potential hoaxes for third-party fact checkers.
- For example, if an article has been reviewed by fact checkers, Facebook will show the fact-checking stories below the original post.
Why it matters: This is the latest in a series of steps Facebook has taken to limit the spread of false news on its platform and expose users to a more diverse range of topics and viewpoints. Earlier this year, Facebook CEO Mark Zuckerberg laid out an updated vision for Facebook that includes helping to create a better-informed community.
Timeline: According to a Facebook spokesperson, the company has been doing research since April showing that fact checkers' articles in the "Related Articles" unit do in fact help people identify whether the news they are reading is misleading or false. The tests were run in response to people telling the social platform that they want more context to make informed decisions about what they read and share.