Google's DeepMind and the Elon Musk-funded OpenAI are hard at work devising methods to make sure artificial intelligence never works against humanity's interests, Wired reports.
"If you're worried about bad things happening, the best thing we can do is study the relatively mundane things that go wrong in AI systems today," OpenAI's Dario Amodei tells the magazine. "That seems less scary and a lot saner than kind of saying, 'You know, there's this problem that we might have in 50 years.'"
Why their work matters: The researchers are focused on inserting human judgment into machine learning processes. Instead of writing complicated "reward functions" that help AI judge whether its behavior is optimal, they have humans judge and rate AI performance before the system adjusts itself. This collaborative process helps teach humans how machines learn, while also keeping algorithms from going off track.
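The idea above can be illustrated with a toy sketch: rather than hand-coding a reward function, the agent proposes behaviors, a human rates each one, and the agent adopts the behavior humans rate highest. This is only an illustrative simplification, not the actual DeepMind or OpenAI method; the behavior labels and the `human_rating` stand-in are hypothetical.

```python
# Toy human-in-the-loop feedback loop (illustrative only).
def human_rating(behavior):
    # Stand-in for a real human judge, who would rate safe behavior highest.
    scores = {"safe": 1.0, "risky": 0.3, "harmful": 0.0}
    return scores[behavior]

def train_with_human_feedback(behaviors, rounds=10):
    # Collect human ratings for each candidate behavior over several rounds.
    ratings = {b: [] for b in behaviors}
    for _ in range(rounds):
        for b in behaviors:
            ratings[b].append(human_rating(b))
    # The agent adopts the behavior with the best average human rating,
    # instead of optimizing a hand-written reward function.
    return max(behaviors, key=lambda b: sum(ratings[b]) / len(ratings[b]))

print(train_with_human_feedback(["safe", "risky", "harmful"]))  # → safe
```

In a real system the "ratings" would come from people comparing recorded episodes of the agent's behavior, and the learned preferences would stand in for the reward signal during training.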