Jun 8, 2018 - Technology

New tools are making artificial intelligence more fair

Rumman Chowdhury leads Accenture's responsible AI efforts. Photo: Frances Denny

With computer algorithms being called on to make more decisions, and more consequential ones, a growing field has emerged to help ensure the models are fair and free of bias. Among the latest efforts is a new "fairness tool" that consulting giant Accenture is detailing at an AI conference next week.

Why it matters: AI is being used to make an increasing array of decisions, from who gets parole to whether someone is offered a loan or a job. But without rooting out bias in both training data and models, these algorithms risk simply codifying existing human misperceptions.

Accenture is far from alone in trying to develop tools to remove bias from AI.

  • At F8, Facebook talked about Fairness Flow, a tool it says it's using to seek out biases for or against a particular group of people.
  • Recruiting startup Pymetrics developed Audit-AI to root out bias in its own algorithms for determining whether a candidate is a good fit for a job (see the sketch after this list). Now the company is releasing it as open source in hopes others may benefit:
"We believe that all creators of technology are responsible for creating the future that we want to live in. For us, that future is one that is bias-free."

How it works: Accenture's tool examines both the data used to train a model and the algorithm itself to see whether any particular group is being treated unfairly.
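
Here is a minimal sketch of that two-sided check, assuming a toy loan-approval classifier. This is illustrative Python, not Accenture's actual tool: it measures the gap in positive-outcome rates between two groups, first in the training labels (bias in the data) and then in the model's predictions (bias the model learns or amplifies).

```python
# Minimal sketch of a two-sided fairness check: compare outcome rates
# across a protected group in the training data, then in the model.
# Toy data and a generic classifier; NOT Accenture's actual tool.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy loan data: a protected attribute (0/1) and one credit feature
# that happens to correlate with group membership.
n = 1000
group = rng.integers(0, 2, n)
credit = rng.normal(0, 1, n) + 0.5 * group
label = (credit + rng.normal(0, 0.5, n) > 0).astype(int)  # historical approvals

model = LogisticRegression().fit(credit.reshape(-1, 1), label)
pred = model.predict(credit.reshape(-1, 1))

def parity_gap(outcome, group):
    """Absolute difference in positive-outcome rates between groups."""
    return abs(outcome[group == 1].mean() - outcome[group == 0].mean())

# Check the data (historical labels) and the model (its predictions).
print("gap in training labels:  ", parity_gap(label, group))
print("gap in model predictions:", parity_gap(pred, group))
```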

Origin story: Rumman Chowdhury, who leads responsible AI at Accenture Applied Intelligence, developed what became the fairness tool with the assistance of a study group of researchers at the Alan Turing Institute. The tool is being formally announced next week at CogX in London.

More people in the room: One of the benefits, Chowdhury said, is that you don't have to be an experienced coder to make use of the tool. That helps promote another important means of combating AI bias: making sure more people are part of the discussion.

"It’s a really good way to start incorporating different people into the AI development process, people who aren't necessarily data scientists."
— Rumman Chowdhury to Axios.

Yes, but: Chowdhury notes the fairness tool isn't a silver bullet. It works best on certain types of models, known as classification models, and needs discrete, rather than continuous, variables.

"I don't want people to think you can push a button and fix for fairness because you can’t. While this is one tool that certainly does help, it doesn’t solve for everything."
— Chowdhury
  • Also, correcting for bias can make an algorithm more fair, but sometimes at the expense of accuracy, as the sketch below illustrates.
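
A hypothetical sketch of that tradeoff: one common correction (not necessarily the one Accenture's tool applies) is to shift each group's decision threshold so both groups are approved at the same rate. Equalizing the rates this way typically costs some overall accuracy, because the corrected cutoffs move away from the single most accurate one.

```python
# Hypothetical sketch of the fairness/accuracy tradeoff using per-group
# decision thresholds; toy data, not any vendor's actual method.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, n)
score = rng.normal(0, 1, n) + 0.6 * group    # model score; group 1 scores higher
truth = (score + rng.normal(0, 0.7, n) > 0.3).astype(int)

# One shared cutoff: near-optimal accuracy, but unequal approval rates.
pred = (score > 0.3).astype(int)

# Parity correction: pick a cutoff per group so both groups are approved
# at the same overall rate as before.
rate = pred.mean()
fair = np.zeros(n, dtype=int)
for g in (0, 1):
    cutoff = np.quantile(score[group == g], 1 - rate)
    fair[group == g] = (score[group == g] > cutoff).astype(int)

print("approval rate by group, shared cutoff:",
      [float(pred[group == g].mean()) for g in (0, 1)])
print("accuracy, shared cutoff:   ", float((pred == truth).mean()))
print("accuracy, parity-corrected:", float((fair == truth).mean()))
```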

Go deeper: Another key component of ethical AI is transparency. Check out this article for more on the push to create AI that can show its work.
