Apr 13, 2017

Computers can be biased, just like us

Computers that learn words from texts written by humans capture their meaning but also our biases, a new study shows.

Why it matters: Machine learning is being eyed to sift through resumes in an effort to reduce discrimination in hiring, to analyze loan applications, and to predict criminal behavior while reducing racial profiling. The unintended biases found in artificial intelligence raise ethical questions about whether and how to deploy the technology without reinforcing stereotypes. (See Exhibit A, the racist Microsoft bot.)

How it works: The researchers created a test of how closely the AI associates different words and uncovered gender and racial biases similar to those documented in humans by well-known psychological studies. They found that European-American names were more closely associated with pleasant words (honest, gentle, happy), whereas unpleasant words (divorce, filth, jail) were more likely to be attributed to African-American names. Young people were considered pleasant; old people were not. They then looked at gender bias and found the AI associated women more than men with family and the arts, and less with mathematics.
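For readers curious about the mechanics, here is a minimal sketch of that kind of association test in Python. It scores a target word by its average similarity to a set of pleasant words minus its average similarity to a set of unpleasant words, mirroring the per-word association measure used in the study. The random toy vectors below are placeholders, not real embeddings (the researchers used vectors trained on large web-text corpora), so this sketch only illustrates the measurement, not the biased result itself; the name choices are illustrative.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, pleasant, unpleasant):
    """Mean similarity to pleasant words minus mean similarity to unpleasant words.

    A positive score means the target word sits closer, on average,
    to the pleasant set than to the unpleasant set.
    """
    return (np.mean([cosine(w, a) for a in pleasant])
            - np.mean([cosine(w, b) for b in unpleasant]))

# Toy random vectors stand in for real word embeddings.
rng = np.random.default_rng(0)
words = ["honest", "gentle", "happy", "divorce", "filth", "jail", "Emily", "Keisha"]
vectors = {word: rng.normal(size=50) for word in words}

pleasant = [vectors[w] for w in ("honest", "gentle", "happy")]
unpleasant = [vectors[w] for w in ("divorce", "filth", "jail")]

for name in ("Emily", "Keisha"):
    print(name, round(association(vectors[name], pleasant, unpleasant), 3))
```

Run over embeddings actually trained on human text, the same score comes out systematically higher for European-American names than for African-American names, which is the gap the study reports.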

Thought bubble: Context provides bias but also meaning. How much bias can be removed before that meaning is lost?

The study authors don't recommend untraining the machine because of the risk of removing crucial knowledge about the world. "Artificial intelligence learns biases but it needs the awareness not to make prejudiced decisions. Since machines don't possess self-awareness the way humans do, a human in the loop can help machines make ethical decisions," says Princeton's Aylin Caliskan.

Go deeper