But in the largest-ever study of real-world mortgage data, economists Laura Blattner of Stanford University and Scott Nelson of the University of Chicago show that differences in mortgage approval between minority and majority groups are not due to bias alone, but also to the fact that minority and low-income groups have less data in their credit histories.
This means that when that data is used to calculate a credit score, and that credit score is used to predict how likely an applicant is to default on a loan, the prediction will be less precise. It is this lack of precision that drives inequality, not just bias.
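To see why a thin credit file translates into a less precise prediction, here is a minimal simulation (not from the study; the group sizes, record counts, and default rate below are invented for illustration). Two groups have the same underlying risk, but one is observed through a long credit history and the other through a short one:

```python
import numpy as np

rng = np.random.default_rng(0)

def estimated_default_rates(true_rate, n_records, n_borrowers=10_000):
    """Estimate each borrower's default risk from a limited number of
    observed credit events (payments, accounts, and so on)."""
    # Each borrower's history is n_records Bernoulli draws of a "bad event".
    events = rng.binomial(n_records, true_rate, size=n_borrowers)
    return events / n_records  # naive per-borrower risk estimate

true_rate = 0.10  # same underlying risk for both groups (illustrative)

thick_files = estimated_default_rates(true_rate, n_records=100)  # rich history
thin_files = estimated_default_rates(true_rate, n_records=10)    # sparse history

print(f"thick-file estimates: mean={thick_files.mean():.3f}, std={thick_files.std():.3f}")
print(f"thin-file estimates:  mean={thin_files.mean():.3f}, std={thin_files.std():.3f}")
# Both estimates are centred on the true risk (~0.10), but the thin-file
# estimates scatter far more widely around it: they are noisier, so any
# approval decision based on them is less accurate.
```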
The implications are stark: better algorithms will not solve the problem.
“This is a really striking result,” says Ashesh Rambachan, who studies machine learning and economics at Harvard University but was not involved in the study. Bias and patchy credit records have been hot issues for some time, but this is the first large-scale experiment to examine the loan applications of millions of real people.
Credit scores bring together a range of socio-economic data, such as employment history, financial records, and purchasing habits, into one number. In addition to deciding loan applications, credit scores are now used to make many life-changing decisions, including decisions about insurance, hiring, and housing.
To understand why minority and majority groups were treated differently by mortgage lenders, Blattner and Nelson collected credit reports for 50 million anonymized American consumers and linked each of those consumers to socio-economic details drawn from a marketing dataset, their property deeds and mortgage transactions, and data on the mortgage lenders who granted them loans.
One of the reasons this is the first such study is that these datasets are proprietary and not publicly available to researchers. “We went to a credit bureau and basically had to pay them a lot of money to do it,” says Blattner.
Noisy data
They then experimented with different predictive algorithms to show that credit scores were not just biased but “noisy,” a statistical term for data that can’t be used to make accurate predictions. Take a minority applicant with a credit score of 620. In a biased system, we might expect that score to consistently overestimate the applicant’s risk, so that a more accurate score would be, say, 625. In theory, that bias could then be corrected by some form of algorithmic affirmative action, such as lowering the approval threshold for minority applicants.
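A small simulation sketches the difference between a biased score and a noisy one, and why shifting the approval threshold can fix the former but not the latter (the score distribution, cutoff, and error sizes below are hypothetical, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(1)

n = 100_000
true_score = rng.normal(650, 50, size=n)  # hypothetical "true" creditworthiness
cutoff = 640                              # lender approves at or above this score

# Biased score: systematically 5 points too low for this group.
biased = true_score - 5
# Noisy score: right on average, but with large random error.
noisy = true_score + rng.normal(0, 30, size=n)

def error_rate(observed, adjusted_cutoff):
    """Share of applicants whose approval decision disagrees with the
    decision the true score would have produced."""
    return np.mean((observed >= adjusted_cutoff) != (true_score >= cutoff))

print("biased score, original cutoff:    ", error_rate(biased, cutoff))
print("biased score, cutoff lowered by 5:", error_rate(biased, cutoff - 5))  # ~0: bias fully corrected
print("noisy score, original cutoff:     ", error_rate(noisy, cutoff))
print("noisy score, cutoff lowered by 5: ", error_rate(noisy, cutoff - 5))   # still high: noise remains
```

Lowering the cutoff undoes a systematic offset exactly, but when the score is merely noisy there is no threshold adjustment that recovers the decisions an accurate score would have made.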