Sep 4, 2019 - Technology
Expert Voices

A proposed HUD rule on AI could allow for housing discrimination

A black-and-white photo of a house broken up by pixelated static. Illustration: Rebecca Zisser/Axios

HUD recently proposed a rule that would protect financial institutions from liability for using algorithms to make lending decisions, as long as the technology used was produced or distributed by a recognized company.

Why it matters: AI can inadvertently rely on characteristics that stand in for, or correlate with, race, gender and socioeconomic class. Under the proposed rule, financial institutions could make illegal determinations and hide behind an AI product.

The big picture: Financial institutions are increasingly using AI to detect suspicious activity, optimize portfolios, recommend strategic investments and assess creditworthiness.

  • The impact: There are factors the financial sector may use that — while not explicitly equivalent to race or gender — correlate with those characteristics and can result in discrimination.

How it works: Institutions may decide, for example, that unbanked individuals are less creditworthy. Using that factor in loan decisions could disadvantage people of color and women, as the sketch after the figures below illustrates.

  • Nearly 17% of African Americans and 14% of Hispanic Americans are unbanked, compared to just 3% of white Americans.
  • 15% of unmarried female-headed family households are also unbanked.
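
To make the proxy problem concrete, here is a minimal, hypothetical sketch in Python. The decision rule never sees race or gender; it only penalizes unbanked status. Auditing the outcomes by group afterward still shows a disparity, measured with the "four-fifths" impact ratio regulators often use as a rule of thumb. The applicants, groups, threshold and point penalty are all invented for illustration and are not drawn from any real underwriting model.

# Hypothetical illustration: a lending rule that never looks at race
# can still produce a disparate impact if it leans on a proxy feature.
# All numbers below are invented for illustration.

from collections import defaultdict

# Each applicant: (group, is_unbanked, credit_score). The group label is
# used only to audit outcomes afterward -- the decision rule never sees it.
applicants = [
    ("group_a", False, 700), ("group_a", False, 660), ("group_a", True, 640),
    ("group_b", True, 690),  ("group_b", True, 655),  ("group_b", False, 645),
]

def approve(is_unbanked: bool, credit_score: int) -> bool:
    """Toy decision rule: dock 40 points for being unbanked, approve at 650+."""
    adjusted = credit_score - (40 if is_unbanked else 0)
    return adjusted >= 650

# Audit approval rates by group.
totals, approvals = defaultdict(int), defaultdict(int)
for group, unbanked, score in applicants:
    totals[group] += 1
    approvals[group] += approve(unbanked, score)

rates = {g: approvals[g] / totals[g] for g in totals}
print("approval rates:", rates)

# "Four-fifths" rule of thumb: a selection rate below 80% of the most
# favored group's rate is a red flag for disparate impact.
best = max(rates.values())
for group, rate in rates.items():
    flag = "(flag)" if rate / best < 0.8 else ""
    print(f"{group}: impact ratio = {rate / best:.2f} {flag}")

In this toy data, the group with more unbanked applicants is approved at half the rate of the other group and gets flagged, even though the rule itself is facially neutral. That is the kind of pattern the disparate impact standard is meant to catch.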

What's happening: HUD released a proposed rule that would eliminate the disparate impact standard, which prohibits policies or practices that have a disproportionate adverse effect on protected groups. It would also shield financial institutions from liability when they use AI-based tools from third parties, such as tech companies, whether or not the institution knew the algorithm was problematic.

  • What to watch: If the HUD rule is enacted, algorithms could obscure the reason for a credit denial.
  • But if lenders are still required to give borrowers clear denial notices under the Fair Credit Reporting Act, potential homeowners may have some means of identifying illegal or inappropriate grounds for a determination, even if the financial institution is shielded from liability.

The bottom line: There are already challenges in applying anti-discrimination laws to AI-based determinations. The newly proposed HUD rule would make this considerably more difficult.

Miriam Vogel is the executive director of EqualAI, an adjunct professor at Georgetown Law and a former associate deputy attorney general at the Department of Justice.
