Why the EU is forcing AI to explain its behavior
By May of next year, artificial intelligence algorithms in the EU will be legally required to provide users with an explanation "every time it uses [their] personal data to choose a particular recommendation or action," writes AI lawyer John Frank Weaver:
- For example: Weaver cites Amazon's usually reliable Alexa, which recently cued up Sir Mix-a-Lot, to the befuddlement of its owner.
- Why AI owes an explanation: Weaver argues the U.S. should follow suit to promote transparency and give users greater control over technologies that play an increasingly important role in their lives.
- Why we should tread carefully: Thomas Burri, an assistant professor of international and European law at the University of St. Gallen, told Weaver: "If the first thing you need to consider when designing a new program is the explanation, does that stifle innovation and development? ... Some decisions are hard to explain, even with full access to the algorithm."