Axios Managing Editor Kim Hart led a July 18th roundtable discussion on ethics and bias in new technologies and the responsibilities that fall to government, the private sector and individuals.
The big picture: Guests offered cross-sector perspectives on how explainability, accountability and transparency fit into tech responsibility.
Be smart: Explainability is the key to ethical artificial intelligence. It means "teaching the computer to explain its decision making," according to Axios' Ina Fried.
Former FTC Commissioner Terrell McSweeny weighed in on the issue of explainability: "Not every consumer is going to need to understand every aspect of how this tech works. But… companies have to organize themselves and govern because explainability is not just for users, it's also for the regulators and the enforcers. There comes a point where saying 'I don't know why it did that' is not going to be an acceptable answer."
Laura Moy, deputy director of Georgetown Law's Center on Privacy and Technology, spoke about expanding the frame of thinking around tech responsibility.
What’s next: The conversation dove into how the federal government and tech companies should balance responsibility when regulating new technologies like AI.
Michael Hind, a Distinguished Research Staff Member at IBM, underlined the need for tech companies to improve their tools to fix bias in data and help explain decisions.
Klon Kitchen, a Senior Research Fellow at the Heritage Foundation, stressed that our current system is set up for “stability, not agility” and challenged the private sector to take an “active approach to identifying working ways forward for self-regulating best practices.”
Nicholas Degani, Special Counselor to the FCC Chairman, on innovation today: "Our market rewards output explainability. Consumers need to know what's inside the box… what can it do and why people are using the product. The thing that made Apple so great is that… the product did exactly what you wanted it to do, when you wanted it to do it."
The other side: Guests also discussed the impact of artificial intelligence on the human experience, including how technology can enable discriminatory practices.
Natasha Duarte, the AI Lead for the Center for Democracy & Technology, reminded guests to question who AI works for and to challenge the bias against those without access to the interface.
Thank you, IBM, for sponsoring this event.