Researchers in the U.K. are suggesting that — like aircraft — robots be equipped with a black box that records their decisions, the rationale behind them, and their step-by-step actions, per the Guardian. The proposal is an interim answer to a growing push in the field for "explainable artificial intelligence" — AI whose decisions and reasoning can be understood and interrogated by humans.
Alan Winfield, a professor at the University of the West of England, and Marina Jirotka, a professor at Oxford University, are proposing the "ethical black box" as a tool for investigating accidents involving robots and AI-backed systems. They were to argue the case for making such systems mandatory at a conference Thursday at the University of Surrey.
Why it matters: Winfield argues, "Serious accidents will need investigating, but what do you do if an accident investigator turns up and discovers there is no internal datalog, no record of what the robot was doing at the time of the accident? It'll be more or less impossible to tell what happened."