Dec 14, 2019 - Technology

A tug-of-war over biased AI

Illustration: Eniola Odetunde/Axios

The idea that AI can replicate or amplify human prejudice, once argued mostly at the field's fringes, has been thoroughly absorbed into its mainstream: Every major tech company now makes the necessary noise about "AI ethics."

Yes, but: A critical split divides AI reformers. On one side are the bias-fixers, who believe the systems can be purged of prejudice with a bit more math. (Big Tech is largely in this camp.) On the other side are the bias-blockers, who argue that AI has no place at all in some high-stakes decisions.

Why it matters: This debate will define the future of the controversial AI systems that help determine people's fates through hiring, underwriting, policing and bail-setting.

What's happening: Despite the rise of the bias-blockers in 2019, the bias-fixers remain the orthodoxy.

  • A recent New York Times op-ed laid out the prevailing argument in its headline "Biased algorithms are easier to fix than biased people."
  • "Discrimination by algorithm can be more readily discovered and more easily fixed," says UChicago professor Sendhil Mullainathan in the piece. Yann LeCun, Facebook's head of AI, tweeted approvingly: "Bias in data can be fixed."
  • But the op-ed was met with plenty of resistance.
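
To make the bias-fixers' claim concrete, here is a minimal sketch, in Python, of one common version of "fixing bias in the data": reweighting training examples so an underrepresented group carries as much total weight as the majority. Everything in it (the groups, the 9:1 split, the variable names) is invented for illustration; it is not how Facebook or any other company actually builds its pipelines.

```python
# Toy sketch of "fixing bias in the data" via reweighting.
# All data below is synthetic; the 9:1 group imbalance is invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
groups = rng.choice(["A", "B"], size=1000, p=[0.9, 0.1])  # group B is underrepresented

# Give each example a weight inversely proportional to its group's frequency,
# so both groups contribute equally to a downstream training loss.
counts = {g: int(np.sum(groups == g)) for g in np.unique(groups)}
weights = np.array([len(groups) / (len(counts) * counts[g]) for g in groups])

for g, c in counts.items():
    print(f"group {g}: {c} examples, total weight {weights[groups == g].sum():.1f}")
# Despite the imbalance, each group ends up with roughly equal total weight.
```

Reweighting is only one of several candidate fixes (resampling and post-hoc threshold adjustment are others), which is part of why critics say the harder question is whether the system should exist at all.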

The other side: At the top academic conference for AI this week, Abeba Birhane of University College Dublin presented the opposing view.

  • Birhane's key point: "This tool that I'm developing, is it even necessary in the first place?"
  • She gave classic examples of potentially dangerous algorithms, like one that claimed to determine a person's sexuality from a photo of their face, and another that tried to guess a person's ethnicity.
  • "[Bias] is not a problem we can solve with maths because the very idea of bias really needs much broader thinking," Birhane tells Axios.

The big picture: In a recent essay, Frank Pasquale, a UMD law professor who studies AI, calls this a new wave of algorithmic accountability that looks beyond technical fixes toward fundamental questions about economic and social inequality.

  • "There's definitely still resistance around it," says Rachel Thomas, a University of San Francisco professor. "A lot of people are getting the message about bias but are not yet thinking about justice."
  • "This is uncomfortable for people who come up through computer science in academia, who spend most of their lives in the abstract world," says Emily M. Bender, a University of Washington professor. Bender argued in an essay last week that some technical research just shouldn't be done.

The bottom line: Technology can help root out some biases in AI systems. But this rising movement is pushing experts to look past the math to consider how their inventions will be used beyond the lab.

  • "AI researchers need to start from the beginning of the study to look at where algorithms are being applied on the ground," says Kate Crawford, co-founder of NYU's AI Now Institute.
  • "Rather than thinking about them as abstract technical problems, we have to see them as deep social interventions."

The impact: Despite a flood of money and politics propelling AI forward, some researchers, companies and voters hit pause this year.

  • One potential outcome: freezes or restrictions on other controversial uses of AI. This scenario scares tech companies, which would rather send plumbers in to repair buggy systems than rip out the pipes entirely.

But the question at the core of the debate is whether a fairness fix even exists.

The swelling backlash says it doesn't — especially when companies and researchers ask machines to do the impossible, like guess someone's emotions by analyzing facial expressions, or predict future crime based on skewed data.

  • "It's anti-scientific to imagine that an algorithm can solve a problem that humans can't," says Cathy O'Neil, an auditor of AI systems.
  • These applications are "AI snake oil," argues Princeton professor Arvind Narayanan in a presentation that went viral on nerd Twitter recently.
  • The main offenders are AI systems meant to predict social outcomes, like job performance or recidivism. "These problems are hard because we can’t predict the future," Narayanan writes. "That should be common sense. But we seem to have decided to suspend common sense when AI is involved."

The spark for this blowback was a 2017 research project by MIT's Joy Buolamwini, who found that major facial recognition systems struggled to identify female and darker-skinned faces.
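
Her approach amounts to disaggregated evaluation: instead of one overall accuracy number, report the error rate for each demographic subgroup. The sketch below illustrates that idea on synthetic predictions and labels; the subgroup names echo the intersectional categories in her study, but the error rates are made up, not measurements of any real system.

```python
# Rough sketch of a disaggregated audit: per-subgroup error rates expose
# gaps that a single aggregate number hides. Predictions, labels and the
# simulated error rates are all synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
subgroups = ["lighter_male", "lighter_female", "darker_male", "darker_female"]
subgroup = rng.choice(subgroups, size=n)
y_true = rng.integers(0, 2, size=n)

# Simulate a classifier that errs far more often on some subgroups.
simulated_error = {"lighter_male": 0.02, "lighter_female": 0.07,
                   "darker_male": 0.12, "darker_female": 0.30}
flip = rng.random(n) < np.array([simulated_error[g] for g in subgroup])
y_pred = np.where(flip, 1 - y_true, y_true)

print(f"overall error: {np.mean(y_pred != y_true):.1%}")  # looks mild in aggregate
for g in subgroups:
    mask = subgroup == g
    print(f"{g:>15}: {np.mean(y_pred[mask] != y_true[mask]):.1%}")
```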

What's next: Companies are tightening access to their AI algorithms, invoking intellectual property protections to avoid sharing details about how their systems arrive at critical decisions.

  • "The real problem is we citizens have no power to even examine or scrutinize these algorithms," says O'Neil. "They're being used by private actors for commercial gain."