Computers can be biased, just like us

Computers that learn language from human-written text pick up the meanings of words, but also our biases, a new study shows.
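The effect the study measures can be sketched with a toy example. The vectors below are hand-made stand-ins, not embeddings from the study: real systems learn such vectors from billions of words, and associations in the training text show up as geometric closeness. The bias score here mirrors, in miniature, the kind of association test the researchers ran.

```python
import math

def cosine(u, v):
    """Cosine similarity: how closely two word vectors point the same way."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical 2-D vectors standing in for learned word embeddings.
vectors = {
    "flower":     (0.9, 0.1),
    "insect":     (0.1, 0.9),
    "pleasant":   (0.8, 0.2),
    "unpleasant": (0.2, 0.8),
}

def association(word):
    """How much more strongly a word leans 'pleasant' than 'unpleasant'."""
    return (cosine(vectors[word], vectors["pleasant"])
            - cosine(vectors[word], vectors["unpleasant"]))

# In these toy vectors, "flower" sits nearer "pleasant" than "insect" does,
# echoing one of the benign associations the study found in real embeddings.
print(association("flower") > association("insect"))
```

Because the machine never sees definitions, only patterns of co-occurrence, a benign association like flowers-and-pleasantness and a harmful stereotype are learned by exactly the same mechanism.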

Why it matters: Machine learning is being eyed to sift through resumes in an effort to reduce discrimination in hiring, to analyze loan applications, and to predict criminal behavior while reducing racial profiling. The unintended biases found in artificial intelligence raise ethical questions about whether and how to deploy the technology without reinforcing stereotypes. (See Exhibit A, Microsoft's racist Tay bot.)