Axios Pro Exclusive Content

Tackling AI bias in employment


Illustration: Gabriella Turrisi/Axios

Regulators and employers are now grappling with how AI could transform hiring and recruiting.

Why it matters: Companies are increasingly turning to automated tools to help them make hiring, salary, promotion and other employment decisions.

  • With no federal privacy law protecting Americans, troves of data are collected and used to feed automated employment tools.
  • Lawmakers and regulators worry that if that data isn't screened for bias, those tools could exacerbate stereotypes that largely harm people of color.

Threat level: Most Americans say racial and ethnic bias in employment is a problem, but many say AI would improve rather than worsen the issue, according to a December 2022 Pew study.

Yes, but: Lawmakers and regulators say responsibly deploying AI necessitates oversight of the private sector.

  • And companies want clarity on how existing anti-discrimination and civil rights laws apply to the technology, which is something the White House's AI executive order aims to do.

One way to mitigate bias in employment AI, and AI risk more generally, is to allow researchers and smaller players, not just a handful of big companies, to examine models.

  • The National AI Research Resource is designed to encourage greater participation in AI's development and will bring in an ethics committee to scan the data sets it offers for bias, but lawmakers have not yet funded it.
  • NSF and NIST are hosting a workshop in the spring to discuss the creation of the ethics committee and other NAIRR activities, NSF Office of Advanced Cyberinfrastructure director Katie Antypas told Axios.

Another idea floated in Congress would require companies to be transparent about their bias-auditing processes and their results.

  • The Algorithmic Accountability Act would require companies to report impact assessments for bias to the FTC.
  • The FTC would then provide consumers and advocates with information on which critical decisions companies have automated, along with data sources, high-level metrics and how to contest decisions.

What we're watching: whether AI companies will be willing to expose the details of their systems.

What they're saying: Rep. Yvette Clarke, a member of the Congressional Black Caucus and cosponsor of the Algorithmic Accountability Act, said there hasn't been significant pushback as the bill has gone through revisions.

  • She told Axios that existing civil rights laws give the legislation good standing in debates over whether transparency requirements violate First Amendment or proprietary information rights.
  • Clarke, who is a member of the new House AI task force, said she will use the working group to build support for her bill and highlight how data privacy is directly related to AI regulation.
  • "My hope is that through this task force we will be dealing with the issues of algorithmic bias as well as data privacy," Clarke said.
  • "Those are the fundamental building blocks for really drilling down and not relegating ourselves to a replication of the discrimination that we have in our physical world."

Meanwhile, Indeed, one of the largest job sites in the world, says it's already taking action to mitigate AI bias in its systems.

  • On privacy, head of responsible AI Trey Causey said data minimization principles need to be clearly defined.
  • "There is an inherent tension between the data that you need to evaluate bias and to mitigate bias. So I do think the evaluation of AI systems and the data that requires could potentially be at odds with a data minimization standard."
  • On transparency requirements, Causey said he's in favor, but that they would succeed only if companies operating in good faith were given a kind of "cure period" to remedy any issues; otherwise, fear of immediate repercussions would disincentivize testing.

What's next: The AI executive order directs the Labor Department to work with unions and workers to develop guidelines for employers around equity, protected activity, compensation, and health and safety implications of AI in the workplace.

  • The Labor Department is creating guidance, to be released in the coming months, around algorithmic decision-making and its impact on hiring, supervision and promotion decisions.
  • "We realize that this is a really important issue that is a source of excitement for some and anxiety for others," Muneer Ahmad, senior counsel at the Department of Labor, told Ashley.
  • "It's hard to know if what we're experiencing is something completely new or whether this is similar to other forms of automation or other introductions of technology into the workplace."
  • Of note: You can read the first installment in our AI Meets Equity series here.