

Case study: Irresponsible AI in loan decisions

A recent paper shows how AI can make discrimination worse and how popular bias mitigation algorithms fall short.

Here are some highlights. See the attached PDF for a visual version.

➤ Housing discrimination in the US

In the US, black mortgage applicants are

❗️54% less likely❗️

to get a mortgage loan

➤ AI can make discrimination worse

When off-the-shelf AI was used to make loan decisions

black mortgage applicants were

❗️67% less likely❗️

to get a mortgage loan
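Figures like "54% less likely" describe a relative gap in approval rates between groups. As a minimal sketch with made-up numbers (not the paper's data, and assuming "less likely" means a simple relative rate gap rather than an odds ratio), such a figure can be computed like this:

```python
# Hypothetical illustration (made-up numbers, not the paper's data) of how a
# "X% less likely" disparity figure can be derived from group approval rates.

def relative_disparity(approvals_a, total_a, approvals_b, total_b):
    """How much less likely group B is to be approved than group A,
    expressed as a fraction of group A's approval rate."""
    rate_a = approvals_a / total_a
    rate_b = approvals_b / total_b
    return (rate_a - rate_b) / rate_a

# Numbers chosen so that group B is 54% less likely to be approved.
print(f"{relative_disparity(80, 100, 36.8, 100):.0%}")
```

The same function works for counts of any size, since only the two approval rates matter.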

➤ Popular bias mitigation algorithms are inadequate

When popular bias mitigation algorithms were used

black mortgage applicants were

equally likely to get a loan

but

❗️the loan amounts they got were much lower❗️

In addition, when using popular bias mitigation algorithms

the false positive rate was

❗️362.29% higher❗️

(false positives are cases where a bad loan is approved)

These mistakes are very costly for lenders
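A relative increase like "362.29% higher" compares the mitigated model's false positive rate to the baseline model's. As a hedged sketch with made-up counts (not the paper's data), the arithmetic looks like this:

```python
# Hypothetical illustration (made-up numbers, not the paper's data) of how a
# "362.29% higher" false positive rate can be computed from group outcomes.

def false_positive_rate(bad_loans_approved, bad_loans_total):
    """Fraction of bad loans (ones that would not be repaid) that were approved."""
    return bad_loans_approved / bad_loans_total

# Baseline model vs. a model with a bias mitigation algorithm applied.
fpr_baseline = false_positive_rate(7, 100)
fpr_mitigated = false_positive_rate(32.36, 100)  # chosen so the increase is ~362.29%

relative_increase = (fpr_mitigated - fpr_baseline) / fpr_baseline
print(f"False positive rate increase: {relative_increase:.2%}")
```

Note that a percentage increase is measured against the baseline rate, so a low baseline can make even a modest absolute change look dramatic.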

➤ The researchers caution against using AI models to process mortgage applications, even if the models are "fair"

➤ Some of my takeaways:

🌟 This case illustrates well how AI can not only reflect social biases but also make discrimination worse.

🌟 It is extremely important to carefully track and reduce bias in AI decision-making.

➤ The paper is "AI and housing discrimination: the case of mortgage applications" by Leying Zou & Warut Khern-am-nuai

Attached: Irresponsible AI in Lending - Ravit Dotan (PDF, 608KB)


