Mar 6, 2019 - Technology

How human bias seeps into autonomous vehicles' AI

A new study suggests black people are more likely to get hit by an autonomous vehicle than white people, Vox writes.

Why it matters: The findings are the latest example of how human bias seeps into artificial intelligence. If AVs are trained on data in which light-skinned people make up most of the examples of what constitutes a "human," they will be less reliable at recognizing dark-skinned people as "human" in the real world.
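
One concrete way that imbalance shows up is in a simple audit of the training data itself, before any model is trained. The Python sketch below is purely illustrative, not the study's code: it assumes a hypothetical list of pedestrian annotations, each carrying a rating on the Fitzpatrick skin-tone scale (1–6), which the researchers used.

```python
from collections import Counter

# Hypothetical annotation records; a real dataset would have thousands.
# Each pedestrian label carries a Fitzpatrick skin-tone rating (1-6).
annotations = [
    {"image": "img_0001.jpg", "skin_tone": 2},
    {"image": "img_0002.jpg", "skin_tone": 5},
    {"image": "img_0003.jpg", "skin_tone": 1},
]

def bin_tone(tone: int) -> str:
    """Bucket a Fitzpatrick rating into the study's two groups."""
    return "light (1-3)" if tone <= 3 else "dark (4-6)"

counts = Counter(bin_tone(a["skin_tone"]) for a in annotations)
total = sum(counts.values())
for group, n in sorted(counts.items()):
    print(f"{group}: {n} pedestrians ({n / total:.0%} of examples)")
```

An audit like this doesn't fix a model, but it flags skew before training, when it is cheapest to correct.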

Details: The study, by researchers at the Georgia Institute of Technology, tried to determine how accurately state-of-the-art object-detection models, like those used by self-driving cars, detect people from different demographic groups, Vox explains.

  • Researchers split a large dataset of pedestrian images into two groups by skin tone, using the Fitzpatrick scale.
  • Then they compared how often the AI models correctly detected pedestrians in the light-skinned group versus the dark-skinned group (a code sketch of this comparison follows the list).
  • Detection of dark-skinned people was 5 percentage points less accurate.
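
In code, the researchers' comparison amounts to measuring a detector's recall separately for each skin-tone group. The sketch below is a minimal illustration under assumptions: the detect function, box format, and two-record dataset are hypothetical stand-ins for a real object-detection model and a labeled pedestrian dataset, not the study's pipeline.

```python
from typing import Callable, Dict, List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2) corners

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def detection_rate(examples: List[Dict], detect: Callable[[str], List[Box]],
                   thresh: float = 0.5) -> float:
    """Fraction of ground-truth pedestrians the detector finds (recall)."""
    hits = sum(
        any(iou(pred, ex["box"]) >= thresh for pred in detect(ex["image"]))
        for ex in examples
    )
    return hits / len(examples)

# Hypothetical stand-ins: a real evaluation would load a pedestrian dataset
# with Fitzpatrick skin-tone labels and call an actual detection model.
dataset = [
    {"image": "img_0.jpg", "box": (10.0, 10.0, 50.0, 90.0), "skin_tone": 2},
    {"image": "img_1.jpg", "box": (20.0, 15.0, 60.0, 95.0), "skin_tone": 5},
]

def detect(image_path: str) -> List[Box]:
    return [(12.0, 11.0, 49.0, 88.0)]  # pretend the model returned one box

light = [ex for ex in dataset if ex["skin_tone"] <= 3]  # Fitzpatrick 1-3
dark = [ex for ex in dataset if ex["skin_tone"] >= 4]   # Fitzpatrick 4-6
gap = detection_rate(light, detect) - detection_rate(dark, detect)
print(f"light-minus-dark detection gap: {gap:+.1%}")
```

Measuring recall per group, rather than overall accuracy, matters here: on an imbalanced dataset a model can score well overall while still missing the smaller group.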

The bottom line: AI, including the AI in AVs, can be just as biased as its creators, and that needs to be addressed.

  • Samantha Huang, a senior associate at BMW iVentures, wrote about the problem last fall, after an AV test vehicle she was riding in failed to detect two black pedestrians.
  • Had the engineers who built the system come from more racially diverse backgrounds, she wrote, they probably would have been less likely to train their algorithms only on images of light-skinned people.

Go deeper: Humans cause most self-driving car accidents
