Artificial intelligence algorithms can indeed create a world that distributes resources more efficiently and, in theory, can offer more for everyone.
Yes, but: If we aren't careful, these same algorithms could actually lead to greater discrimination by codifying the biases that exist both overtly and unconsciously in human society. What's more, the power to make these decisions lies in the hands of Silicon Valley, which has a decidedly mixed record on spotting and addressing diversity issues in its midst.
Airbnb's Mike Curtis put it well when I interviewed him this week at VentureBeat's MobileBeat conference:
"One of the best ways to combat bias is to be aware of it. When you are aware of the biases then you can be proactive about getting in front of them. Well, computers don't have that advantage. They can't be aware of the biases that may have come into them from the data patterns they have seen."
Dig deeper: It also matters what the algorithms are optimizing for. Airbnb, in general, is looking to train its algorithms to learn which factors are most likely to lead to a positive experience for guests when they make a reservation. However, a customer with a racial bias, for example, may be more satisfied when they see only white hosts. To further Airbnb's goal of an open, non-discriminatory platform, the company has to recognize this issue, choose to prioritize non-discrimination, and then program accordingly.
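To make that optimization point concrete, here is a minimal, purely hypothetical sketch (this is not Airbnb's actual system; the data, field names, and scoring rules are invented for illustration). It shows how a ranking that naively optimizes historical "guest satisfied" labels can absorb guests' bias, and how one imperfect way to "program accordingly" is to score hosts only on factors they control:

```python
# Hypothetical illustration only: if historical "satisfied" labels reflect some
# guests' racial bias, a ranking that blindly optimizes predicted satisfaction
# will learn to favor the group those guests preferred. Scoring on behavior the
# host controls (e.g. response time) is one imperfect corrective.
from statistics import mean

# Invented historical bookings: identical service quality, but biased guests
# rated stays with hosts from group "B" lower.
bookings = [
    {"host_id": 1, "host_group": "A", "response_hours": 1, "satisfied": 1},
    {"host_id": 2, "host_group": "A", "response_hours": 2, "satisfied": 1},
    {"host_id": 3, "host_group": "B", "response_hours": 1, "satisfied": 0},
    {"host_id": 4, "host_group": "B", "response_hours": 2, "satisfied": 1},
]

def score_naive(host):
    """Rank hosts by average historical satisfaction -- absorbs guest bias."""
    rows = [b for b in bookings if b["host_group"] == host["host_group"]]
    return mean(b["satisfied"] for b in rows)

def score_blind(host):
    """Rank hosts only on behavior the host controls (here: response time)."""
    return 1.0 / (1 + host["response_hours"])

hosts = [
    {"host_id": 1, "host_group": "A", "response_hours": 2},
    {"host_id": 3, "host_group": "B", "response_hours": 1},
]

print(sorted(hosts, key=score_naive, reverse=True))  # favors group "A"
print(sorted(hosts, key=score_blind, reverse=True))  # favors the faster responder
```

Even this toy fix is incomplete (bias can leak back in through proxy variables), which is why the choice of objective has to be made deliberately rather than left to whatever the historical data happens to reward.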
Concern is growing:
- The ACLU has raised concerns that age, sex, and race biases are already being codified into the algorithms that power AI.
- ProPublica found that a computer program used in various regions to decide whom to grant parole would go easy on white offenders while being unduly harsh to black ones.
- It's an issue that Weapons of Math Destruction author Cathy O'Neil raised in a popular talk at the TED conference this year. "Algorithms don't make things fair," she said. "They automate the status quo."