Is It Time To Start Using Race And Gender To Combat Bias In Lending?
By Kareem Saleh, Contributor | Founder & CEO, FairPlay. Opinions expressed by Forbes Contributors are their own.
Aug 10, 2022, 09:21am EDT
Trying to achieve fairness through blindness has not worked.
A woman, let’s call her Lisa, applies for a loan. She’s 35 with a graduate degree, a high earning trajectory and a 670 credit score.
She also just returned to work after taking time off to start a family. Her application goes to an algorithm, which assesses her risk profile to determine whether she should be approved. The algorithm sees her recent gap in employment and labels her a “risky” borrower.
The result? Her application is rejected. Examples like this happen every day in lending. Are these decisions fair? When it comes to fairness in lending, a cardinal rule is, “Thou shalt not use variables like race, gender or age when deciding whether to approve someone for a loan.” This rule dates back to the Equal Credit Opportunity Act (ECOA), passed in 1974 to stop lenders from deliberately denying loans to Black applicants and segregating neighborhoods—a practice called redlining. The problem got so bad that the government banned the consideration of race or gender in loan approvals and other high-stakes decisions. The assumption behind ECOA was that if decision makers—be they humans or machines—are unaware of attributes like race or gender at decision time, then the actions they take will be based on “neutral” and “objective” factors that are fair.
There’s just one problem with this assumption: It’s wishful thinking to assume that keeping algorithms blind to protected characteristics means the algorithms won’t discriminate. In fact, building models that are “blind” to protected status information may reinforce pre-existing biases in the data.
As legal scholar Pauline Kim observed: “Simply blinding a model to sensitive characteristics like race or sex will not prevent these tools from having discriminatory effects. Not only can biased outcomes still occur, but discarding demographic information makes bias harder to detect, and, in some cases, could make it worse.” In a credit market where Black applicants are often denied at twice the rate of White applicants and pay higher interest rates despite strong credit performance, the time has come to admit that “Fairness Through Blindness” in lending has failed.
If we want to improve access to credit for historically underrepresented groups, maybe we need to try something different: Fairness Through Awareness, where race, gender and other protected information is available during model training to shape the resulting models to be fairer. Why would Fairness Through Awareness work better? Consider Lisa, the applicant above. Many underwriting models look for consistent employment as a sign of creditworthiness: the longer you’ve been working without a gap, the thinking goes, the more creditworthy you are.
But if Lisa takes time out of the workforce to start a family, lending models that weight “consistent employment” heavily will rank her as less creditworthy (all other things being equal) than a man who worked through that period. The result is that Lisa will have a higher chance of being rejected, or of being approved on worse terms, even if she’s demonstrated in other ways that she’s just as creditworthy as a similar male applicant. Models that make use of protected data during training can prevent this outcome in ways that “race and gender blind” models cannot.
If we train an AI model with the knowledge that some of the applicants it will encounter are women, and that women are more likely to take time out of the workforce, then in production the model will know that someone who takes time off shouldn’t necessarily be deemed riskier. Simply put, different people and groups behave differently. And those differences may not make members of one group less creditworthy than members of another.
If we give algorithms the right data during training, we can teach them more about these differences. This new data helps the model evaluate variables like “consistent employment” in context, and with greater awareness of how to make fairer decisions. Fairness Through Awareness techniques are showing impressive results in healthcare, where “identity-aligned” algorithms tailored to specific patient populations are driving better clinical outcomes for underserved groups.
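To make the mechanics concrete, here is a minimal sketch in Python of one way such training can work. Everything in it is an illustrative assumption (synthetic data, a hypothetical is_female flag, made-up feature names), not any lender’s actual model: the protected attribute is used only to compute a fairness penalty during training, and it is never a model input, so nothing about gender is needed at decision time.

```python
# Minimal sketch of "Fairness Through Awareness" training on synthetic data.
# The protected attribute (a hypothetical is_female flag) is used ONLY to compute
# a fairness penalty during training; it is never a model input, so nothing about
# gender is needed, or used, at decision time.
import numpy as np

rng = np.random.default_rng(0)
n = 5000

is_female = rng.integers(0, 2, n)                       # protected attribute (training only)
credit = rng.normal(0.0, 1.0, n)                        # standardized credit score
career_break = rng.binomial(1, np.where(is_female == 1, 0.35, 0.10))
employment = rng.normal(1.0, 0.3, n) - 1.5 * career_break   # "consistent employment" signal
# Repayment in this toy data depends on credit quality, not on the career break.
repaid = (rng.random(n) < 1.0 / (1.0 + np.exp(-(1.2 * credit + 0.2)))).astype(float)

X = np.column_stack([np.ones(n), credit, employment])   # model inputs exclude the protected attribute
w = np.zeros(X.shape[1])
lam = 2.0                                               # strength of the fairness penalty

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(3000):
    p = sigmoid(X @ w)
    grad_loss = X.T @ (p - repaid) / n                  # ordinary logistic-regression gradient
    # Fairness penalty: squared gap between the groups' average predicted scores.
    gap = p[is_female == 1].mean() - p[is_female == 0].mean()
    dgap = (X[is_female == 1] * (p * (1 - p))[is_female == 1, None]).mean(axis=0) \
         - (X[is_female == 0] * (p * (1 - p))[is_female == 0, None]).mean(axis=0)
    w -= 0.5 * (grad_loss + lam * 2.0 * gap * dgap)

p = sigmoid(X @ w)
print("average score, women vs. men:",
      round(float(p[is_female == 1].mean()), 3),
      round(float(p[is_female == 0].mean()), 3))
```

Other awareness techniques, from adversarial debiasing to searching for less discriminatory alternative models, apply the same principle: protected data shapes how the model is trained, not how any individual application is scored.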
Lenders using Fairness Through Awareness modeling techniques have also reported encouraging results. In a 2020 study, researchers trained a credit model using information about gender. Under the gender-aware model, about 80% of women received higher credit scores than they did under the gender-blind model.
Another study, done by my co-founder John Merrill, found that an installment lender could safely increase its approval rate by 10% while also increasing its fairness to Black applicants, measured by adverse impact ratio, by 16%. The law does not prohibit using data like gender and race during model training—though regulators have never given explicit guidance on the matter. For years lenders have used some consciousness of protected status to avoid discrimination by, say, lowering a credit score approval threshold from 700 to 695 if doing so results in a more demographically balanced portfolio.
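For readers unfamiliar with the metric, the adverse impact ratio is simply the approval rate of the protected group divided by the approval rate of the control group. The short sketch below uses made-up score distributions (an assumption for illustration, not data from the studies above) to show how the metric is computed and how even a small cutoff change, like the 700-to-695 example, can move it.

```python
# Adverse impact ratio (AIR): the approval rate of the protected group divided by
# the approval rate of the control group. An AIR of 1.0 means parity; values below
# roughly 0.8 are commonly treated as a red flag. All numbers here are made up.
import numpy as np

rng = np.random.default_rng(1)
black_scores = rng.normal(680, 40, 10_000)   # hypothetical credit-score distributions
white_scores = rng.normal(700, 40, 10_000)

def adverse_impact_ratio(protected_scores, control_scores, cutoff):
    protected_rate = np.mean(protected_scores >= cutoff)
    control_rate = np.mean(control_scores >= cutoff)
    return protected_rate / control_rate

for cutoff in (700, 695):
    air = adverse_impact_ratio(black_scores, white_scores, cutoff)
    print(f"cutoff {cutoff}: AIR = {air:.2f}")
```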
In addition, using protected status information is expressly permitted to test models for disparate impact and to search for less discriminatory alternatives. Granted, allowing protected data in credit modeling carries some risk. It is illegal to use protected data at decision time, and whenever lenders possess protected status information there is a chance that it will inappropriately influence their decisions.
As such, Fairness Through Awareness techniques in model development require safeguards that limit use and preserve privacy. Protected data can be anonymized or encrypted, access to it can be managed by third-party specialists, and algorithms can be designed to maximize both fairness and privacy.
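As one illustration of what such safeguards might look like, here is a hypothetical sketch: protected attributes sit in a separate, access-controlled store keyed by a salted pseudonym, the scoring function never sees them, and they are joined back only for aggregate fairness reporting. Every name and number in it is an assumption for illustration, not a description of any lender’s system.

```python
# Hypothetical sketch of a privacy safeguard: protected attributes are kept in a
# separate, access-controlled store keyed by a salted pseudonym. The scoring
# function never sees them; they are joined back only for aggregate fairness reports.
import hashlib
import hmac

SALT = b"rotate-this-secret"   # in practice a managed secret, ideally held by a third-party specialist

def pseudonym(applicant_id: str) -> str:
    """Deterministic pseudonym so records can be joined without exposing raw IDs."""
    return hmac.new(SALT, applicant_id.encode(), hashlib.sha256).hexdigest()

# Two separate stores: the scoring pipeline only ever touches credit_features.
credit_features = {pseudonym("A-1001"): {"score": 702, "employment_months": 18}}
protected_store = {pseudonym("A-1001"): {"race": "Black", "gender": "F"}}   # access-controlled

def score(features: dict) -> float:
    """Decision-time scoring sees no protected attributes at all."""
    return 0.7 * features["score"] / 850 + 0.3 * min(features["employment_months"], 60) / 60

def fairness_report(features_by_id: dict, protected_by_id: dict) -> dict:
    """Aggregate-only join, e.g. average score by gender, for disparate impact testing."""
    by_group = {}
    for pid, feats in features_by_id.items():
        group = protected_by_id.get(pid, {}).get("gender", "unknown")
        by_group.setdefault(group, []).append(score(feats))
    return {g: sum(scores) / len(scores) for g, scores in by_group.items()}

print(fairness_report(credit_features, protected_store))
```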
Fairness Through Blindness has created a delusion that the disparities in American lending are attributable to “neutral” factors found in a credit report. But studies show again and again that protected status information, if used responsibly, can dramatically increase positive outcomes for historically disadvantaged groups at acceptable levels of risk. We’ve tried to achieve fairness in lending through blindness. It hasn’t worked.
Now it’s time to try Fairness Through Awareness, before the current disparities in American lending become a self-fulfilling prophecy.