Jul 6, 2023 - Technology

NYC law promises to regulate AI in hiring, but leaves crucial gaps


Buildings in New York City's Hudson Yards. Photo: Spencer Platt/Getty Images

A new New York City law is setting a precedent when it comes to protecting workers from bias when companies use artificial intelligence in hiring, but some experts warn that key questions remain regarding how effective it will be.

Why it matters: Advocates argue that the new law is a significant step in the fight to regulate AI in the U.S., which has lagged behind other parts of the world.

  • But while the New York law is the broadest law of its kind passed so far, "God help us if this is the model for the way regulation works on this," Matthew Scherer, senior policy counsel for workers' rights at the Center for Democracy and Technology, told Axios.

Driving the news: The new law was enacted in 2021, and the city's Department of Consumer and Worker Protection (DCWP) began enforcing its final rules on Wednesday.

  • The law requires employers to have annual third-party "bias audits" to show that the AI technology they use is free of racist or sexist bias.
  • The law also requires employers to notify job candidates about the use of AI tools in the hiring process.

State of play: The use of AI tools in hiring has become increasingly commonplace, from screening thousands of resumes to AI-led video interviews.

  • AI tools can help companies review large numbers of applications with limited HR resources, Jacob Appel, chief strategist for Orcaa, a consultancy that runs algorithmic audits, told Axios.
  • New York's new law is intended to identify and weed out the use of AI tools that might perpetuate biases, a concern that has plagued AI tools in the past.

But, but, but: The law contains loopholes that undercut its effectiveness, Scherer told Axios.

  • The law focuses narrowly on racial and gender discrimination, leaving out other forms of discrimination, such as those based on age or disability.
  • The single statistical test required for the audit merely compares the proportions of candidates selected from each group, whereas a real bias audit would typically be much more robust, Scherer explained.
  • “A bias audit has to be much deeper than running that one statistical test, because bias can make its way into these assessments from a million different angles,” he explained.
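To make Scherer's point concrete: the comparison the audit requires boils down to computing each group's selection rate and dividing it by the highest group's rate (an "impact ratio"). The sketch below, with invented group labels and numbers, shows roughly what that one statistical test looks like; it is an illustration, not the DCWP's prescribed methodology.

```python
# Hypothetical illustration of a selection-rate comparison.
# Group names and counts are invented for the example.
hiring_outcomes = {
    # group: (candidates screened by the tool, candidates selected)
    "group_a": (1000, 220),
    "group_b": (800, 120),
    "group_c": (500, 95),
}

# Selection rate: share of each group's candidates the tool selected.
selection_rates = {
    group: selected / screened
    for group, (screened, selected) in hiring_outcomes.items()
}
top_rate = max(selection_rates.values())

# Impact ratio: each group's selection rate divided by the highest rate.
# A ratio well below 1.0 signals the tool selects that group less often.
impact_ratios = {g: rate / top_rate for g, rate in selection_rates.items()}

for group, ratio in sorted(impact_ratios.items()):
    print(f"{group}: selection rate {selection_rates[group]:.2%}, "
          f"impact ratio {ratio:.2f}")
```

As Scherer notes, a single ratio like this says nothing about where bias entered the pipeline, only that outcomes differ between groups.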

The original version of the law was narrowed down in revisions so that it ultimately only applies to "tools that almost completely replace human decision making processes," Scherer said.

  • And while companies that don't comply with the audit will face fines, it's not clear how the fines will be enforced.
  • "The law itself is so vague as to what sorts of penalties can be imposed that pretty much any penalty that does get imposed is likely to be challenged," Scherer said.

Both Appel and Scherer highlighted that the law does not specify how much disparity across gender, racial or ethnic groups counts as an acceptable amount of bias for an AI tool.

  • Instead, the law focuses on whether companies have complied with the audit, Appel noted.
  • Candidates who feel they may have been discriminated against must rely on existing state and federal anti-discrimination laws for recourse, Scherer noted.
  • The law also only applies to hiring and promotion, and not to the ways that companies might recruit candidates, ignoring an "increasingly important way in which the labor force of the United States is being shaped," according to Scherer.

In developing the law, the DCWP strove to "strike the appropriate regulatory balance between the rights of job applicants and the needs of businesses," a spokesperson told Axios.

  • The new law "does not require any specific actions based on the results of a bias audit," but employers and employment agencies must still comply with state, federal and city anti-discrimination laws to determine what actions to take based off the audit results, the spokesperson added.

What we're watching: The New York law could prove to be a model for other states, Appel said, noting that the law's focus on concrete outcomes for real people, while other audits might focus on other factors, "is the right place to start."

  • States like California, New Jersey, New York and Vermont are also working to craft laws regulating the use of AI in hiring, per the New York Times.

Our thought bubble, from Axios' Ryan Heath: This law will almost certainly start a global rule-making trend, but it could easily be a flop.

  • Many companies are set to ignore the law on the basis of its many loopholes, and age and disability discrimination, two huge problem areas in hiring, are not covered.
  • It could also be years before clear patterns of bias emerge, delaying any potential response.
  • Another recent employment law, requiring publication of salary bands for job vacancies, was a great idea, but some employers have undermined it by publishing pay bands so wide as to make the information meaningless, such as a salary range of $125,000 to $300,000.

Go deeper: Taming the wild west of AI-based hiring
