Photo: Ng Han Guan / AP
Google's DeepMind and the Elon Musk-funded OpenAI are hard at work devising methods to make sure artificial intelligence never works against humanity's interests, Wired reports.
"If you're worried about bad things happening, the best thing we can do is study the relatively mundane things that go wrong in AI systems today," Dario Amodei of OpenAI's tells the magazine. "That seems less scary and a lot saner than kind of saying, 'You know, there's this problem that we might have in 50 years.'"
Why their work matters: The researchers are focused on inserting human judgment into machine learning processes. Instead of writing complicated "reward functions" that help AI judge whether its behavior is optimal, they have humans judge and rate the AI's performance before it adjusts itself. This collaborative process helps teach humans how machines learn, while also keeping algorithms from going off the rails.
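For readers curious about the mechanics, the idea can be illustrated with a minimal sketch of learning a reward model from human preference comparisons. This is not the DeepMind or OpenAI code; the toy setup, feature vectors, and all names below are hypothetical, standing in for the general technique of fitting a reward function to pairwise human judgments rather than writing one by hand.

```python
# Minimal sketch of reward learning from human preference comparisons.
# Everything here is a simplified, hypothetical illustration of the approach,
# not the researchers' actual implementation.
import numpy as np

rng = np.random.default_rng(0)

# Each "behavior clip" is summarized by a feature vector; the reward model
# is a simple linear scorer over those features.
N_FEATURES = 4
true_weights = rng.normal(size=N_FEATURES)   # stands in for the human's hidden judgment
learned_weights = np.zeros(N_FEATURES)       # the reward model we train

def reward(weights, clip):
    return clip @ weights

def human_prefers_first(clip_a, clip_b):
    # Simulated human judge: prefers the clip with the higher "true" value.
    return reward(true_weights, clip_a) > reward(true_weights, clip_b)

# Collect pairwise comparisons and fit the reward model with a logistic
# (Bradley-Terry) loss: P(A preferred over B) = sigmoid(r(A) - r(B)).
LEARNING_RATE = 0.1
for step in range(2000):
    clip_a, clip_b = rng.normal(size=(2, N_FEATURES))
    label = 1.0 if human_prefers_first(clip_a, clip_b) else 0.0

    diff = reward(learned_weights, clip_a) - reward(learned_weights, clip_b)
    prob_a = 1.0 / (1.0 + np.exp(-diff))     # model's predicted preference

    # Gradient of the cross-entropy loss with respect to the weights.
    grad = (prob_a - label) * (clip_a - clip_b)
    learned_weights -= LEARNING_RATE * grad

# The learned reward should now rank new clips roughly as the human would,
# and could be handed to a standard RL algorithm in place of a hand-written
# reward function.
test_a, test_b = rng.normal(size=(2, N_FEATURES))
print("human prefers A:", human_prefers_first(test_a, test_b))
print("model prefers A:", reward(learned_weights, test_a) > reward(learned_weights, test_b))
```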