Facebook CEO Mark Zuckerberg in 2016. Photo: Manu Fernandez / AP
Facebook said Monday it would use artificial intelligence to identify posts and live videos where users are expressing suicidal thoughts.
Why it matters: The company has faced pressure after violent and graphic incidents were live-streamed on its platform, and it has been accused of lacking the human resources to moderate live content effectively.
The details:
- Facebook's Guy Rosen said the company would use "pattern recognition technology to help identify posts and live streams as likely to include thoughts of suicide."
- The technology will launch outside the U.S. and, according to Facebook, will eventually be used worldwide, with the exception of the European Union.
- The company will also assign more human reviewers to vet reports of possible self-harm on the platform.
Go deeper: Our report earlier this year on how Facebook Live had been hosting videos of gruesome crimes.