Illustration: Rebecca Zisser/Axios
The Department of Housing and Urban Development (HUD) recently proposed a rule that would protect financial institutions from liability for using algorithms to make lending decisions, as long as the technology was produced or distributed by a recognized company.
Why it matters: AI models can inadvertently rely on inputs that encode or correlate with race, gender and socioeconomic class. Under the proposed rule, financial institutions could make illegal determinations and hide behind an AI product.
The big picture: Financial institutions are increasingly using AI to detect suspicious activity, optimize portfolios, recommend strategic investments and assess creditworthiness.
- The impact: Factors the financial sector uses may not be explicitly equivalent to race or gender, but they can correlate with those characteristics and result in discrimination.
How it works: Institutions may decide, for example, that unbanked individuals are less creditworthy, and using this factor in loan decisions could disadvantage people of color and women, as the sketch after the figures below illustrates.
- Nearly 17% of African Americans and 14% of Hispanic Americans are unbanked, compared to just 3% of white Americans.
- 15% of unmarried female-headed family households are also unbanked.
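To make the mechanism concrete, here is a minimal Python simulation, not any lender's actual model: the unbanked rates come from the figures above, while the score distribution, the 60-point unbanked penalty and the 620 cutoff are hypothetical. A rule that never sees race still approves the groups at different rates.

```python
# Minimal sketch: a facially neutral rule that penalizes unbanked applicants
# approves demographic groups at different rates, even though it never sees
# race. Unbanked rates are from the figures above; the score distribution,
# penalty and cutoff are hypothetical.
import random

random.seed(0)

UNBANKED_RATE = {"African American": 0.17, "Hispanic American": 0.14, "white": 0.03}
N = 100_000        # simulated applicants per group
THRESHOLD = 620    # hypothetical approval cutoff

def credit_score(is_unbanked: bool) -> float:
    """Same base score distribution for everyone, minus a flat
    penalty for having no bank account (the proxy variable)."""
    return random.gauss(650, 50) - (60 if is_unbanked else 0)

approval_rate = {}
for group, p_unbanked in UNBANKED_RATE.items():
    approved = sum(
        credit_score(random.random() < p_unbanked) >= THRESHOLD
        for _ in range(N)
    )
    approval_rate[group] = approved / N

top = max(approval_rate.values())
for group, rate in approval_rate.items():
    # Adverse impact ratio: a group's approval rate relative to the
    # most-approved group; values well below 1.0 flag disparate impact.
    print(f"{group}: {rate:.1%} approved, impact ratio {rate / top:.2f}")
```

The final loop computes a simple adverse impact ratio of the kind used in some disparate impact analyses: the gap appears even though race is never an input.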
What's happening: HUD released a proposed rule that would eliminate the disparate impact standard, which prohibits policies or procedures that result in a disproportionate adverse impact on protected groups. It would also shield financial institutions from liability when they use AI-based tools from third parties, such as tech companies, whether or not the institution knew the algorithm was problematic.
- What to watch: If the HUD rule is enacted, algorithms could obscure the reason for a credit denial.
- But if lenders must still give borrowers clear denial notices under the Fair Credit Reporting Act, potential homeowners may have some means of identifying illegal or inappropriate grounds for a determination, even if the financial institution is shielded from liability (see the sketch below).
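For contrast, here is a minimal sketch of how denial reasons can be generated when the scoring model is transparent. The linear scorecard, its weights and both applicant profiles are hypothetical, not any real lender's criteria.

```python
# Minimal sketch: with a transparent linear scorecard, a lender can rank
# which inputs pulled an applicant's score down and report them as the
# principal reasons on a denial notice. All features, weights and
# applicant values are hypothetical.

WEIGHTS = {                      # points contributed per unit of each feature
    "years_banked": 8.0,
    "on_time_payment_pct": 1.5,
    "debt_to_income_pct": -2.0,
    "recent_inquiries": -15.0,
}
BASELINE = 500                   # score when every feature is 0
THRESHOLD = 620                  # hypothetical approval cutoff

applicant = {"years_banked": 0,  # unbanked: the proxy discussed above
             "on_time_payment_pct": 90,
             "debt_to_income_pct": 38,
             "recent_inquiries": 2}
typical = {"years_banked": 6, "on_time_payment_pct": 97,
           "debt_to_income_pct": 25, "recent_inquiries": 0}

score = BASELINE + sum(WEIGHTS[f] * v for f, v in applicant.items())

if score < THRESHOLD:
    # Score each feature by how far it dragged this applicant below a
    # typical approved profile; the most negative gaps become reasons.
    deficit = {f: WEIGHTS[f] * (applicant[f] - typical[f]) for f in WEIGHTS}
    reasons = sorted(deficit, key=deficit.get)[:2]
    print(f"Denied (score {score:.0f}). Principal reasons: {reasons}")
    # -> years_banked (the unbanked proxy) surfaces as a top reason
else:
    print(f"Approved (score {score:.0f}).")
```

With an opaque third-party model, none of these weights are visible, so there is no comparably direct readout: that is what makes obscured denial reasons possible.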
The bottom line: There are already challenges in applying anti-discrimination laws to AI-based determinations. The newly proposed HUD rule would make this considerably more difficult.
Miriam Vogel is the executive director of Equal AI, an adjunct professor at Georgetown Law and a former associate deputy attorney general at the Department of Justice.