Illustration: Eniola Odetunde/Axios

The idea that AI can replicate or amplify human prejudice, once argued mostly at the field's fringes, has been thoroughly absorbed into its mainstream: Every major tech company now makes the necessary noise about "AI ethics."

Yes, but: A critical split divides AI reformers. On one side are the bias-fixers, who believe the systems can be purged of prejudice with a bit more math. (Big Tech is largely in this camp.) On the other side are the bias-blockers, who argue that AI has no place at all in some high-stakes decisions.

Why it matters: This debate will define the future of the controversial AI systems that help determine people's fates through hiring, underwriting, policing and bail-setting.

What's happening: Despite the rise of the bias-blockers in 2019, the bias-fixers remain the orthodoxy.

  • A recent New York Times op-ed laid out the prevailing argument in its headline "Biased algorithms are easier to fix than biased people."
  • "Discrimination by algorithm can be more readily discovered and more easily fixed," says UChicago professor Sendhil Mullainathan in the piece. Yann LeCun, Facebook's head of AI, tweeted approvingly: "Bias in data can be fixed."
  • But the op-ed was met with plenty of resistance.

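To make the bias-fixers' position concrete: one classic "fix the data" technique is reweighting training examples so that each demographic group contributes equally to the label distribution. The sketch below is a minimal illustration of that idea, not code from any system mentioned in this story; the group names and data are made up.

```python
from collections import Counter

def reweight(examples):
    """Compute per-(group, label) weights that remove the statistical
    dependence between group and label in training data.
    `examples` is a list of (group, label) pairs."""
    n = len(examples)
    group_counts = Counter(g for g, _ in examples)
    label_counts = Counter(y for _, y in examples)
    pair_counts = Counter(examples)
    # Weight = expected count if group and label were independent,
    # divided by the observed count of that (group, label) pair.
    return {
        (g, y): (group_counts[g] * label_counts[y] / n) / pair_counts[(g, y)]
        for (g, y) in pair_counts
    }

# Skewed toy data: group "a" gets the positive label 3x as often as "b".
data = [("a", 1)] * 30 + [("a", 0)] * 10 + [("b", 1)] * 10 + [("b", 0)] * 30
weights = reweight(data)
# After weighting, both groups have equal weighted positive rates.
```

This mirrors a well-known pre-processing approach from the fairness literature (often called "reweighing"); the bias-blockers' counterargument is that no such adjustment helps if the task itself shouldn't be automated.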
The other side: At the top academic conference for AI this week, Abeba Birhane of University College Dublin presented the opposing view.

  • Birhane's key point: "This tool that I'm developing, is it even necessary in the first place?"
  • She gave classic examples of potentially dangerous algorithms, like one that claimed to determine a person's sexuality from a photo of their face, and another that tried to guess a person's ethnicity.
  • "[Bias] is not a problem we can solve with maths because the very idea of bias really needs much broader thinking," Birhane tells Axios.

The big picture: In a recent essay, Frank Pasquale, a UMD law professor who studies AI, calls this a new wave of algorithmic accountability that looks beyond technical fixes toward fundamental questions about economic and social inequality.

  • "There's definitely still resistance around it," says Rachel Thomas, a University of San Francisco professor. "A lot of people are getting the message about bias but are not yet thinking about justice."
  • "This is uncomfortable for people who come up through computer science in academia, who spend most of their lives in the abstract world," says Emily M. Bender, a University of Washington professor. Bender argued in an essay last week that some technical research just shouldn't be done.

The bottom line: Technology can help root out some biases in AI systems. But this rising movement is pushing experts to look past the math to consider how their inventions will be used beyond the lab.

  • "AI researchers need to start from the beginning of the study to look at where algorithms are being applied on the ground," says Kate Crawford, co-founder of NYU's AI Now Institute.
  • "Rather than thinking about them as abstract technical problems, we have to see them as deep social interventions."

The impact: Despite a flood of money and politics propelling AI forward, some researchers, companies and voters hit pause this year.

  • Most visibly, campaigns to ban facial recognition technology succeeded in San Francisco, Oakland and Somerville, Mass. This week, nearby Brookline banned it, too.
  • One potential outcome: freezes or restrictions on other controversial uses of AI. This scenario scares tech companies, which would rather send plumbers in to repair buggy systems than rip out the pipes entirely.

But the question at the core of the debate is whether a fairness fix even exists.

The swelling backlash says it doesn't — especially when companies and researchers ask machines to do the impossible, like guess someone's emotions by analyzing facial expressions, or predict future crime based on skewed data.

  • "It's anti-scientific to imagine that an algorithm can solve a problem that humans can't," says Cathy O'Neil, an auditor of AI systems.
  • These applications are "AI snake oil," argues Princeton professor Arvind Narayanan in a presentation that went viral on nerd Twitter recently.
  • The main offenders are AI systems meant to predict social outcomes, like job performance or recidivism. "These problems are hard because we can’t predict the future," Narayanan writes. "That should be common sense. But we seem to have decided to suspend common sense when AI is involved."

The spark for this blowback was a 2017 research project from MIT's Joy Buolamwini, who found that major facial recognition systems performed far worse on female and darker-skinned faces.
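Audits like Buolamwini's boil down to a simple measurement: break predictions out by demographic group and compare error rates. Here's a minimal sketch of that computation; the group labels and toy records are invented for illustration and don't come from her study.

```python
def error_rates_by_group(records):
    """Per-group error rate from (group, true_label, predicted_label)
    triples. A large gap between groups is the kind of disparity an
    audit of a face-classification system surfaces."""
    totals, errors = {}, {}
    for group, truth, pred in records:
        totals[group] = totals.get(group, 0) + 1
        if truth != pred:
            errors[group] = errors.get(group, 0) + 1
    return {g: errors.get(g, 0) / totals[g] for g in totals}

# Hypothetical audit records: (group, true label, model's prediction).
audit = [
    ("lighter", "f", "f"), ("lighter", "m", "m"), ("lighter", "f", "f"),
    ("darker", "f", "m"), ("darker", "m", "m"), ("darker", "f", "f"),
]
rates = error_rates_by_group(audit)  # e.g. {"lighter": 0.0, "darker": 0.33...}
```

The measurement itself is trivial; the debate in this story is over what follows from it, a better model or no model at all.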

What's next: Companies are tightening access to their AI algorithms, invoking intellectual property protections to avoid sharing details about how their systems arrive at critical decisions.

  • "The real problem is we citizens have no power to even examine or scrutinize these algorithms," says O'Neil. "They're being used by private actors for commercial gain."
