May 1, 2021 - Technology

Using AI to root out unconscious bias

Illustration: Annelise Capossela/Axios

AI language models are being used to identify instances of racial and gender bias in employee performance reviews.

Why it matters: The tech industry, like every industry, has an ongoing problem with bias in the workplace. AI systems that parse text can help identify bias in at least one area: whom companies decide to hire and promote.

How it works: Text IQ, an AI startup focused on uncovering latent risk in unstructured data such as reports and financial records, recently launched its Unconscious Bias Detector.

  • Its AI system can scan employee performance reviews and identify, for example, whether male managers in a company are more likely to give higher scores to male workers.
  • It can also parse the text in written reviews and identify "work-focused language versus personality-focused language," says Apoorv Agarwal, Text IQ's CEO.
  • If a manager gives one group of workers reviews that focus far more on personality than on work performance, that pattern suggests some element of bias may be at work (a rough sketch of this kind of check follows the list).
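
To make the idea concrete, here is a minimal sketch, not Text IQ's actual system: it uses toy keyword lists in place of a trained language model, and the review records and field names (manager_gender, employee_gender, score, text) are illustrative assumptions. It shows the two checks described above: comparing average scores across manager/employee gender pairings, and estimating how often each group's reviews read as personality-focused rather than work-focused.

```python
# Minimal sketch (not Text IQ's actual system): flag potential rating and
# language disparities in performance reviews using simple heuristics.
from collections import defaultdict
from statistics import mean

# Hypothetical review records; field names are illustrative assumptions.
reviews = [
    {"manager_gender": "M", "employee_gender": "M", "score": 4.5,
     "text": "Delivered the migration project ahead of schedule."},
    {"manager_gender": "M", "employee_gender": "F", "score": 3.5,
     "text": "Friendly and enthusiastic, a pleasure to have around."},
    {"manager_gender": "F", "employee_gender": "M", "score": 4.0,
     "text": "Strong technical results on the quarterly goals."},
    {"manager_gender": "F", "employee_gender": "F", "score": 4.0,
     "text": "Consistently met her delivery targets."},
]

# Toy keyword lists standing in for a real model's judgment of
# work-focused versus personality-focused language.
WORK_TERMS = {"delivered", "results", "targets", "project", "goals", "schedule"}
PERSONALITY_TERMS = {"friendly", "enthusiastic", "pleasure", "likable", "bubbly"}

def language_focus(text):
    """Crudely label a review as work- or personality-focused by keyword counts."""
    words = {w.strip(".,").lower() for w in text.split()}
    work = len(words & WORK_TERMS)
    personality = len(words & PERSONALITY_TERMS)
    return "work" if work >= personality else "personality"

# 1) Average score for each manager-gender / employee-gender pairing.
scores = defaultdict(list)
# 2) How often each employee group receives personality-focused reviews.
focus_counts = defaultdict(lambda: {"work": 0, "personality": 0})

for r in reviews:
    scores[(r["manager_gender"], r["employee_gender"])].append(r["score"])
    focus_counts[r["employee_gender"]][language_focus(r["text"])] += 1

for pair, vals in sorted(scores.items()):
    print(f"manager {pair[0]} -> employee {pair[1]}: mean score {mean(vals):.2f}")

for group, counts in sorted(focus_counts.items()):
    total = counts["work"] + counts["personality"]
    print(f"employee group {group}: {counts['personality'] / total:.0%} personality-focused reviews")
```

A production system would replace the keyword lists with a trained text classifier and add statistical tests before flagging any disparity; the sketch only shows the shape of the analysis.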

What they're saying: "Our goal with this is if we can make something unconscious conscious, that's already doing a lot," says Omar Haroun, Text IQ's COO.

The catch: While natural language processing models like this one have made major leaps in recent years — in part by being able to "combine the social aspect of text along with the linguistic," notes Agarwal — they're far from perfect and shouldn't be relied upon alone.

Context: As firms across industries begin to take diversity and inclusion more seriously, they need to rely on robust data about what's actually happening inside their companies, says Tauhidah Shakir, vice president of human resources and chief diversity officer at Paylocity.

  • "We're looking at all of the available information to say, 'Where do we need to focus?'" she adds. "And we wouldn't be able to do that without that data."

Go deeper: The perils of AI emotion recognition
