Nov 17, 2021 - Technology

Taming the wild west of AI-based hiring

Illustration: Aïda Amer/Axios

New laws and regulations aim to draw some boundaries around the fast-growing but often black-box practice of using AI to hire employees.

Why it matters: Companies large and small have embraced AI-based tools to screen, assess and select job candidates, but algorithmic approaches have been largely unregulated and risk perpetuating biases on race, gender, disability and more.

Driving the news: Earlier this month, the New York City Council passed one of the first laws attempting to regulate the use of automation and AI in hiring, requiring New York-based employers to conduct a bias audit on automated tools before using them to evaluate job candidates.

  • At the end of October, the Equal Employment Opportunity Commission announced a new initiative to ensure AI tools used in hiring and other workplace decisions comply with existing federal anti-discrimination laws.

By the numbers: As companies struggle to find workers — there are more than 10 million job openings, significantly more than the number of unemployed workers — they've increasingly turned to AI and automation to streamline the hiring process.

  • An estimated 99% of Fortune 500 companies use some form of automated applicant tracking system to screen candidates, according to Jobscan.
  • A worldwide survey of thousands of human resources professionals found that their use of predictive analytics — AI-based tools that project how applicants might perform in a job — rose from 10% in 2016 to 39% in 2020.

How it works: The most basic and widespread AI hiring tools screen thousands of resumes at the first stage of the job search, looking for specific terms and qualifications that match previously successful applicants (a minimal sketch of this keyword matching appears after the bullets below).

  • More advanced tools might use everything from facial and emotional analysis on video interviews to algorithmically analyzed computer games to predict a candidate's personality and how well they might be suited for a specific position.
  • "Instead of just looking at resumes, we collect behavioral data through exercises that tell us about applicants' soft and social skills, and which are unbiased for race and gender," says Frida Polli, co-founder and CEO of New York-based Pymetrics, which provides hiring services for companies like McDonald's and Kraft Heinz.

Background: The shift to automated hiring partly reflects a broader change in how companies find workers, as they move from in-house human resources executives to recruitment process outsourcers.

  • As companies changed their focus from promoting insiders to fill positions to searching for external candidates, the number of applicants per corporate job posting rose from 120 in the early 2010s to 250 by the end of the decade, providing an opening for automated tools.

The catch: For all the scientific-sounding promises of AI hiring vendors, the nascent field is closer to "the Wild West," as Wharton School management expert Peter Cappelli has put it.

  • In 2017, Amazon notoriously abandoned a resume-screening algorithm after discovering it penalized resumes containing terms like "women's" because the tool had been trained on years of Amazon hires who were disproportionately men.
  • The predictive power of newer approaches that rely on tools like sentiment analysis in video interviews is largely unproven and could be biased against certain segments of the population.
  • A study by researchers at Harvard Business School found that the growing use of automated tools can filter out otherwise worthy candidates who have been out of work for years, worsening the plight of the long-term unemployed, who currently make up about a third of jobless Americans.

What's next: New laws like New York City's could help improve AI hiring tools by requiring them to undergo a bias audit by an outside company.

  • Polli notes that Pymetrics hired a team from Northeastern University to audit its algorithmic tests and ensure they didn't produce a disparate impact on race or gender, and she says the New York law "could be a big moment for the responsible regulation of this field." (A sketch of the disparate-impact check at the core of such audits appears after this list.)
  • Yes, but: Matt Scherer, senior policy counsel at the Center for Democracy and Technology, says the New York law is too narrow: it focuses on race and gender but not on characteristics like disability status or age, and it covers only hiring, not the use of algorithms for assessment and promotion.
  • "There's a real risk that the bill will be seen as the standard almost by default," he adds.

What to watch: What steps the EEOC ultimately takes on algorithmic hiring and assessment tools, and whether a federal bill to mandate audits for large companies using AI — first proposed in 2019 — ever becomes law.
