Podcast 364: Kareem Saleh of FairPlay

The use of AI underwriting models has become widespread in recent years as lenders look to expand their credit boxes without taking on additional risk. But there has been a lot of talk, particularly in the last year or so, about bias in these lending models. There is a real need to address this bias problem head-on.

My next guest on the Fintech One-on-One Podcast is Kareem Saleh, the CEO and Founder of FairPlay. Their mission is to bring fairness to automated decisioning models in something they call Fairness-as-a-Service.

In this podcast you will learn:

  • Why FairPlay’s mission is personal for Kareem.
  • The origin story of FairPlay.
  • The state of fairness in underwriting models today.
  • Why there is still bias in assessing credit.
  • A detailed explanation as to how they add a fairness aspect to AI models.
  • How they define fairness.
  • How fairness differs between lending verticals.
  • How they created their Algorithmic Bias map on their home page.
  • Why fairness through blindness hasn’t worked.
  • Some of the lenders they are working with today.
  • Why lenders are more receptive today than they were a year or two ago.
  • The feedback he is getting from regulatory agencies.
  • What we should be doing as an industry to engage lawmakers in Washington.
  • How he was able to build such an impressive group of advisors.
  • Why this is about more than lending.

You can subscribe to the Fintech One on One Podcast via Apple Podcasts or Spotify. To listen to this podcast episode there is an audio player directly above or you can download the MP3 file here.

Download a PDF of the Transcription or Read it Below

Welcome to the Fintech One-on-One Podcast, Episode No. 364. This is your host, Peter Renton, Chairman and Co-Founder of LendIt Fintech.

(music)

Before we get started, I want to talk about the 10th Annual LendIt Fintech USA event. We are so excited to be back in the financial capital of the world, New York City, in-person, on May 25th and 26th. It feels like fintech is on fire right now with so much change happening and we’ll be distilling all that for you at New York’s biggest fintech event of the year. We have our best line-up of keynote speakers ever with leaders from many of the most successful fintechs and incumbent banks. This is shaping up to be our biggest event ever as sponsorship support is off the charts. You know, you need to be there so find out more and register at lendit.com

Peter Renton: Today on the show, I’m delighted to welcome Kareem Saleh, he is the CEO and Founder of FairPlay. Now, FairPlay is a really interesting company, they’re doing something that really hasn’t been done before: they’re taking AI-based underwriting models and adding a fairness layer to them, so they call this Fairness-as-a-Service, and we obviously describe that in some depth. Kareem has a really interesting background and founding story about how this all came about, but, you know, the reality is lending today is still unfair for many, many populations, and he has a fantastic map on his website that we talk about, and we talk about how they’re addressing it.

We discuss the things they do to make sure that fairness is really front and center while, at the same time, you know, protecting performance, and so we go into that in some depth. He talks about what’s been happening in government lately and how they are interacting with government, he talks about his wonderful group of advisors, which has a whole bunch of rock stars on it, and much more. It was a fascinating episode, hope you enjoy the show.

Welcome to the podcast, Kareem!

Kareem Saleh: Thanks for having me, Peter, delighted to be here.

Peter: Okay, great to have you. So, let’s get things started by giving the listeners a little bit of background about yourself. You’ve done some interesting things in your career to date, why don’t you give us some of the highlights?

Kareem: So, I’ve been working on underwriting hard-to-score borrowers my whole life, starting at a very young age, actually, as a result of my family’s own experience with lending. My parents are immigrants from Africa, they moved to the States in the early ’70s and it’s a kind of classic American immigrant story, you know, they were highly educated in their home countries, they spoke the Queen’s English, moved to America, needed a modest loan to start a small business and couldn’t get one, and that actually had very profound effects on our family. So it struck me from a very young age that credit is the sine qua non of modern life.

So, I started working on this question of how to underwrite hard-to-score borrowers, how to underwrite under conditions of deep uncertainty, people with thin files, no files, some kind of credit event in their past. I got my start working in frontier emerging markets, Sub-Saharan Africa, Eastern Europe, Latin America, the Caribbean, and then spent a few years at an unfortunately named mobile wallet startup called Isis that was mercifully rebranded Softcard and sold to Google, and then spent several years in the US government, at the State Department and at the Overseas Private Investment Corporation.

That gave me visibility into the underwriting practices of some of the most prestigious financial institutions in the world and what I was quite surprised to find was that even at the commanding heights of global finance the underwriting methodologies, at least at legacy institutions, were still quite primitive and almost all of the decisioning systems used in financial services exhibited disparities, you know, against people of color, women, other historically disadvantaged groups and it’s not because the people who built those models are people of bad faith, it’s largely due to limitations in data and mathematics.

Peter: Right.

Kareem: But part of what interested me about that was, you know, ostensibly we have a legal regime in the US that prohibits that kind of discrimination. And so, I started wondering a little bit about how it is possible that on the one hand discrimination is illegal and on the other hand, it’s ubiquitous, and that was the animating question behind the founding of our company, FairPlay, which is what brings me here today.

Peter: Right. Was there some sort of catalyst to launching the company? I know you were working at another company in the space beforehand, what was sort of the catalyst to start FairPlay?

Kareem: As somebody who’s interested in underwriting, one of the things that my colleagues and I make a practice of doing is reviewing advances in the academic literature to see if there are new mathematical techniques that might, you know, give us or our customers an underwriting edge. And so, about three or four years ago, we started to see the emergence, largely from places like Carnegie Mellon and Stanford, of new mathematical techniques that are sometimes called AI fairness techniques.

These are techniques that are designed, you know, to do a better job of underwriting populations that are not well represented in the data. And so, about four years ago as we were reviewing these papers, we persuaded a major mortgage originator to work with us on a pilot to apply these new AI fairness techniques to their mortgage underwriting, and we were surprised to find that that mortgage originator could have increased their approval rate for Black applicants by something on the order of 10% without any corresponding increase in risk. We found that these new AI fairness techniques had great potential to enhance inclusion and yet virtually nobody was using them in financial services, and then it was around that time, shortly thereafter, that we all witnessed the murder of George Floyd.

So, in the summer of 2020, as the Black Lives Matter movement was sweeping across the country and there were protests in the streets over the murder of George Floyd, you know, my Co-Founder, John, and I started asking ourselves, as I think many people in the industry did, what can we do to ameliorate systemic racism in financial services? Our conclusion was that we could bring these new AI fairness techniques to market through what we call Fairness-as-a-Service, to allow financial institutions to de-bias their digital decisions in real-time, and so that was the kind of backdrop that animated the founding of FairPlay.

Peter: Got you, okay. So then, let’s just take a look at, you know, people would say that we’ve made a lot of strides in underwriting in the last 10 or 15 years, we’ve seen the fintech lenders kind of come to the fore with new underwriting models. Do you think it’s fairer today? Like, even with some of the AI-based underwriting models that are out there today, is it fairer than it was, say, 15 years ago when most underwriting models sort of had a human component? What do you think is the sort of state of play today?

Kareem: You know, Peter, the answer is very murky, right, because you can see this most clearly in the mortgage market. So, you know, the Black home ownership rate is the same as it was at the time of the passing of the Fair Housing Act, you know, 50 years ago, right. So, 50 years ago, we had judgmental underwriters making decisions about who to approve for loans; those decisions are made by algorithms today and yet the Black home ownership rate hasn’t increased at all. So, I would say, at least in the mortgage industry, there is a compelling argument to be made that the algorithms that are being used in underwriting are encoding the biases of the past.

Now, certainly, there have been a number of advances in underwriting in other parts of the consumer finance market, you have now, for example, installment lenders who are doing a better job of using alternative data, cash flow underwriting data, for example, which is supportive of financial inclusion. The question then is though, you know, okay, maybe that data is helping folks get approved for loans, but are those loans being priced fairly?

So, I would say, the question around whether or not the move to algorithmic and automated underwriting has been supportive of financial inclusion is somewhat murky. In some asset classes like mortgage clearly it hasn’t had the effect that we hoped it would and in other asset classes like installment loans, I think we see approval rates going up in ways that are supportive of inclusion, but I worry that there might be pricing and collections unfairness that we have yet to fully uncover.

Peter: Right. So, what is the crux of the problem? Is it a data problem, that we don’t have enough data or the right data, or is it a model problem, that the methodology and how we analyze that data is wrong, or is it a combo?

Kareem: I think it’s both, Peter, I mean, certainly it is a data bias problem, right. If you take Black Americans, who were historically excluded from the financial system, there is just not enough data about their performance for us to be able to make reasonable conclusions about their credit worthiness, and that’s in part because, to the extent that they were ever included in the financial system, they were often either gouged or steered towards predatory products.

So, certainly, data bias is a substantial problem we have to contend with as an industry, but it’s not just data, right, I mean, part of the bias is intrinsic to the math methodologies that are being used too. Let me just give you one example. Almost all AI algorithms must be given a target, an objective, a thing that they seek to relentlessly maximize, and for credit algorithms that target is predicting who is going to default. But if you take a step back and think about it for a minute, giving an algorithm a single-minded objective might cause it to pursue that objective without regard to other harms that it might create in the process.

So, let’s just take the Facebook social media algorithm as an analogy. You know, the Facebook social media algorithm is known to prioritize engagement, it will relentlessly seek to keep you engaged regardless of whether or not the stuff it is showing you to keep you engaged is bad for your mental health or bad for society, right. Or think about self-driving cars: if Tesla gave its self-driving cars the single-minded objective of getting a passenger from Point A to Point B, the self-driving car might do that while driving the wrong way down a one-way street, while blowing through red lights, while causing other mayhem along the way.

So, what does Tesla do? Tesla gives the neural networks which power its self-driving car systems two objectives: get the passenger from Point A to Point B while respecting the rules of the road, and we can do that in financial services. That’s what we’ve done at FairPlay, we’ve taken a page from the Tesla playbook and built algorithms with two targets: predict who is going to default accurately while also minimizing disparities for protected groups, and the good news is, it works. When you give these algorithms an additional priority, there’s a lot of low-hanging fruit in terms of unfairness that they can find and remedy without sacrificing accuracy.

Peter: Okay. So, you’re basically taking existing….because, you know, there have been a lot of sophisticated models that have been created, AI models that are maximizing for overall return, which is really maximizing for no defaults, and you’re adding a fairness piece. So, maybe you can dig into that a little bit more, like, how do you do that?

Kareem: So, in our case, what we have done is to treat unfairness for various protected groups as another form of model error. So, typically, when you are constructing a model, especially a machine learning model, you use something that’s called a loss function, and the loss function, like, tells a model if it is learning correctly and it penalizes the model when it learns the wrong stuff. So, for example, if during the training process a model starts approving a bunch of applicants who would have defaulted, the loss function sends a message back to the model and expresses that the model must revisit its reasoning because it is not doing a good job of underwriting those applicants, it is making errors in its reasoning when underwriting applicants.

So, what we do is we modify the loss function to include a fairness term so that during model development, as the model is learning who will pay back their loans and who will not, it does so with some sensitivity to the fact that there are historically disadvantaged populations that might not be well represented in the data, and the model should do its best to make sure, before it discards one of those applicants, that they don’t resemble good applicants who would have paid back their loans on some dimension that the model did not heavily consider. Let me unpack that a little bit.
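To make that concrete, here is a minimal sketch in Python (using PyTorch) of what adding a fairness term to a loss function can look like. It illustrates the general dual-objective technique Kareem describes, not FairPlay’s actual implementation; the mean-score disparity penalty and the weight lam are simplifying assumptions for the example.

    import torch
    import torch.nn.functional as F

    def fairness_regularized_loss(scores, outcomes, protected, lam=0.5):
        """Accuracy loss plus a penalty on group disparity.

        scores:    predicted probabilities of repayment, shape (n,)
        outcomes:  observed outcomes, 1.0 = repaid, 0.0 = defaulted
        protected: boolean mask, True for protected-class applicants
        lam:       assumed weight trading accuracy against fairness
        """
        # Standard credit-model objective: penalize prediction errors.
        accuracy_loss = F.binary_cross_entropy(scores, outcomes)

        # Fairness term: the gap between the average score assigned to
        # protected and non-protected applicants.
        disparity = (scores[protected].mean() - scores[~protected].mean()).abs()

        # Disparity, like default risk, is now an error to be minimized.
        return accuracy_loss + lam * disparity

Because gradients flow through both terms during training, the model trades a sliver of its single-minded default prediction for lower disparity, which is the low-hanging fruit Kareem mentions.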

Peter: Okay.

Kareem: One variable that we often see in credit algorithms is the consistency of the applicant’s employment. And if you think about it, consistency of employment is a very reasonable variable on which to assess the credit worthiness of a man, but consistency of employment will always discriminate against women between the ages of 20 and 45 who take time out of the workforce to have families. So, what we do is we train the models so that when they encounter somebody that they are about to decline for inconsistent employment, rather than let that inconsistent employment be outcome-determinative, the model should run a check to see if that applicant resembles good applicants on other dimensions.

Have they ever declared bankruptcy, how desperate do they appear to be in seeking credit, do they have strong stability of residence, you know, is the number of professional licenses they have increasing? There might be all of these other dimensions on which the applicant resembles good borrowers notwithstanding the fact that they might have the occasional gap in their employment. Does that make sense?
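Here is a rough sketch of that second-look check, using the dimensions Kareem lists. The feature names and thresholds are hypothetical, and a production system would learn this resemblance comparison from data rather than hard-code it.

    def second_look(applicant: dict) -> bool:
        """Return True if a tentative decline deserves reconsideration."""
        # Dimensions on which the applicant might resemble good borrowers
        # despite an occasional employment gap (thresholds are made up).
        signals = [
            not applicant["ever_declared_bankruptcy"],
            applicant["recent_credit_inquiries"] <= 2,      # not desperately seeking credit
            applicant["years_at_residence"] >= 3,           # stability of residence
            applicant["professional_licenses_trend"] > 0,   # licenses increasing
        ]
        # Don't let one weak variable be outcome-determinative if the
        # applicant looks like a good borrower on most other dimensions.
        return sum(signals) >= 3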

Peter: Yes. So, when you say fairness, you are not just saying racial fairness, you’re saying different types of fairness, is that fair to say?

Kareem: Yes. So, technically, under the Equal Credit Opportunity Act, the law recognizes a set of protected classes: you can be protected on the basis of race, age, gender, national origin, disability, marital status, whether or not you’re a service member. We define fairness broadly to include all of the protected classes under the law, but then also other folks who might not technically be protected but at least deserve a fair shot, so, for example, thin or no-file applicants.

Peter: Got you, got you, okay. So, what you’re doing is you’re taking your model and you’re really optimizing for more than one thing. To go back to the Tesla analogy you made, Tesla could optimize getting from A to B as quickly as possible and it could endanger people, so you’re adding these other pieces. The goal is still to get from A to B, the goal is still to find good borrowers who will not default and who will perform well, but are you also taking into account, like you mentioned before, pricing? Is that part of what you’re doing or is that beyond your purview?

Kareem: It is. So, we believe that the law requires that various decisions across the customer journey must be made fairly. So, if you think about the customer journey of lending, the marketing decision must be made fairly, fraud detection decisions must be made fairly, underwriting decisions must be made fairly, pricing decisions must be made fairly, account management decisions, things like line assignments for credit cards, must be made fairly and, of course, collections must be done fairly. So, we think that there are a number of high-stakes decisions across the consumer lending journey where fairness is implicated, and all of those decisions must be made fairly.

Peter: Got you, okay. And so, you mentioned differences between like home loans and personal loans, are there big differences in the different lending verticals? I’m thinking of, you know, credit cards, there’s auto loans, there’s student loans, obviously there’s mortgages, what are you seeing there?

Kareem: What we observe is that there are fairness issues of various kinds in almost every consumer credit vertical. So, we talked a bit about mortgage, and in mortgage you can see that nationally Black applicants are denied at twice the rate of White applicants. So, I would say in the mortgage market we have an approval rate problem, but in the auto loan market virtually nobody is denied, almost everybody is approved for auto loans.

The question is, what are the terms that you are offering on those auto loans? And as we know, there has been some concern, for example, about things like yield spread premiums and dealer markups, and so in auto I think the question isn’t an approval fairness issue, it’s a pricing fairness issue. We see the potential for fairness risks in almost every consumer credit product we touch, it’s just that they may arise at different stages of the customer journey.

Peter: Right, right. I want to talk about this fairness map, the tool that you have on your website. Actually, I went and looked at the city that I live in, and you actually isolate some of the areas within the city that are more fair and less fair, and it’s a really interesting tool. In looking at my city, I can see that it was picking up things that feel like the reality on the ground. What are you using to create that tool?

Kareem: As a result of the Home Mortgage Disclosure Act, every mortgage originator in the country is actually required to submit certain loan-level information to the government every year and the government, in theory, makes that data available to the public in something called the Home Mortgage Disclosure Act database so that the public can understand if a particular lender is engaging in, let’s say, redlining. But, of course, it’s the US government so they make that data available in like the least helpful format possible (Peter laughs).

The Mortgage Fairness Map started as an internal tool, the best products, it seems, always start as internal tools, because we would be getting ready to pitch prospective clients and I would want to understand something about their fairness before the call, and I would say to my team, can we please go pull the mortgage records from the HMDA database for this particular lender, and I would get sent a spreadsheet that I could make neither heads nor tails of.

And so, after about five or six episodes like this, I had a minor nervous breakdown (Peter laughs) and I told my team, I want you to go scrape all of the data in the Home Mortgage Disclosure Act database going back several years, and I want us to build an interactive map that represents the state of mortgage fairness in America, as you point out, all the way down to the census tract level, to the block group level. I want to understand, block by block, what is the state of mortgage fairness in America.

As you can see on the website, www.fairplay.ai, the state of mortgage fairness in America for Black and Native American applicants, in particular, is quite bleak, and one of the kind of disheartening things about the map is, take mortgage fairness for Hispanic Americans. Hispanic Americans tend to be approved at about 85% of the rate of White Americans, compared to 75% of the rate for Black Americans, but what’s interesting about the Hispanic map is the more Hispanic your neighborhood, the fairer the mortgage market is to Hispanics.

So, if you look at Southern California or Southern Florida, where Hispanic populations predominate, the mortgage market is really fair to Hispanics. What caused all of our stomachs to turn was, if you look at the Black and the Native American maps, we observe the opposite effect, which is to say the Blacker your neighborhood, or the more likely you are to be on an Indian reservation, the less fair the mortgage market is to those groups.
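For readers who want to reproduce the basic metric behind a map like this, here is a minimal sketch using the public HMDA loan-level file. The column names loosely follow the post-2018 public HMDA schema and the file path is a placeholder, so treat the details as assumptions rather than FairPlay’s methodology.

    import pandas as pd

    # Hypothetical local extract of the public HMDA loan-level data.
    loans = pd.read_csv("hmda_lar.csv")

    # In HMDA coding, action_taken == 1 means the loan was originated.
    loans["approved"] = loans["action_taken"] == 1

    # Approval rate per census tract, broken out by race.
    rates = (
        loans.groupby(["census_tract", "derived_race"])["approved"]
             .mean()
             .unstack()
    )

    # A mortgage-fairness style ratio: 0.75 would mean Black applicants
    # in a tract are approved at 75% of the rate of White applicants.
    rates["black_white_ratio"] = (
        rates["Black or African American"] / rates["White"]
    )
    print(rates["black_white_ratio"].describe())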

Peter: It’s staggering to see, you’ve got it on your home page here, and in some southern states almost the entire state is blanketed as unfair, that’s really quite staggering.

Kareem: The message that all of this drives home to me is that those lending decisions used to be made by humans, they are now being made by algorithms, and the algorithms appear to be replicating the disparities of the past.

Peter: Right.

Kareem: In many parts of the American mortgage market, and this is a controversial statement, Black applicants are treated as if there were still a three-fifths clause in the Constitution.

Peter: Wow. Well, it’s great to identify the problem and, as they say, awareness is always the first step, right, so….

Kareem: It’s funny that you use the word awareness because one of the things we say at FairPlay is that for the last 50 years we have tried to achieve fairness in lending through blindness, this idea that if we blinded ourselves to the protected status of the applicant, we could rely on variables that are “neutral” and objective to increase positive outcomes for those historically disadvantaged groups.

And as I think I’ve said earlier on the pod, the Black home ownership rate today is exactly what it was 50 years ago, so fairness through blindness hasn’t worked. One of the things that we advocate at FairPlay is what we call Fairness Through Awareness, which is precisely what you said, right, this is the idea that maybe we ought to be using information about these historically disadvantaged groups in a privacy-preserving and responsible way to see if we might not do a better job of bringing them into the financial system.

Peter: Right. So, you’ve been around for less than two years, but I’d love to get a sense of, can you tell us some of the lenders that you’re working with today?

Kareem: I’m happy to report that since our founding I think we have fast emerged as the default fair lending solution for the fintech industry. American Banker has reported that both Figure Technologies and Happy Money are FairPlay customers and I’m delighted to report that we’re going to have several other big names that we’re going to be able to announce publicly soon, but the good news is we’re growing very fast and we’re seeing rapid adoption across the fintech industry by folks in mortgage, auto, power sports finance, credit cards, installment lending, device lending. You name the consumer credit vertical and there’s a good chance there’s a FairPlay customer there using our software to optimize their decisioning systems to be fairer.

Peter: That’s great, that’s great, because look at some of the things that are coming out of the CFPB in the last couple of months. I think it was just last week that Director Chopra was testifying in front of the House and the Senate and one of the many things he talked about was being tough on these AI lending models, algorithmic lending; it doesn’t seem like he’s a big fan. Has that changed the conversations you’re having with your customers, with the lenders? Are they more receptive to your message now than they were, say, a year ago?

Kareem: I think so. I think everybody who pays attention to developments in Washington can see there is a new sheriff in town, fintech is in the crosshairs and there is a perception that the algorithms that are being used in the industry, left to their own devices, will discriminate against historically disadvantaged groups. One of the things that we hear a lot these days is, hey, we know that we need to do a better job on fairness, because the customers that represent the future of our business and the regulators are increasingly demanding fairness from the brands that they patronize.

Peter: Right, makes sense. So then, I imagine you’re talking with government agencies yourselves, I mean, what is the feedback you’re getting from your conversations with government?

Kareem: We think that the regulators and policy makers, both at the federal and the state levels, are really trying to get up the curve and enhance their understanding of AI technologies and what their potential promise and the associated risks might be. And so, we have taken it as kind of a mandate, frankly, to maintain an active and intensive dialogue with the regulators on issues related to AI technologies and their governance and their fairness.

So, one of the things that we have done in that regard is publish a piece recently with the Brookings Institution setting forth what we call an AI Fair Lending Policy Agenda for the federal financial regulators, laying out what we believe to be the right ways to harness AI systems so that they can produce positive and inclusive outcomes in financial services and guard against the potential risks of those technologies.

Peter: Right, right. Do you think we should be doing more as an industry, because clearly, there’s risk right now with anyone using….and I imagine you’re probably a little different, I’m guessing, than most of the other companies out there that really are touting AI lending, because you’ve got such a focus on fairness, you’re probably going to be treated a little differently, but shouldn’t we be doing more as an industry to educate policy makers in Washington?

Kareem: I think so. I think that companies ought to be meeting with regulators to explain the technologies they’re using and the steps that they are taking to ensure that those technologies do not pose a threat either to the consumers they serve or to the safety and soundness of the financial system. I have some sympathy for the folks in the government because the technologies have evolved and changed so rapidly that if you’re not inside one of these lenders then it is hard to keep your finger on the pulse of what’s happening out there in the market. 

And so, we were meeting with a lender a few weeks ago who said, you know, this seems like a regulatory solution, I don’t even know the name of my regulator, and it occurred to me that, you know, it would be quite unfortunate for that lender to learn the name of their regulator for the first time sitting across the table from them, facing down the barrel of an enforcement action, right. At that point, I think that lender will wish that they had been more proactive about explaining what it is they’re doing and why they feel it’s appropriate and the steps they’ve taken to ensure its safety.

Peter: Right, right. I was telling you this before we hit record here and, you know, you really have a fantastic group of advisors. I’ve had no less than four people reach out to me in the last couple of months telling me that I’ve got to get you on the podcast so here we are. You list some of the people on your website, but how have you been able to build such an impressive group of advisors?

Kareem: We have always believed that in order to use these more advanced AI systems in a regulated industry like financial services, the regulators would have to be with us; if they weren’t with us on take-off, they weren’t going to be with us on landing. We made it a point very, very early on, even prior to the founding of the company, to maintain an active and intensive dialogue with both current and former regulators, especially those who have lived at the cutting edge of where regulation and technology interact.

And so, we have been extremely fortunate over the years to build relationships with folks like David Silberman, the longtime number two at the CFPB; Manny Alvarez, who in addition to being the General Counsel at Affirm was also the California Commissioner of Financial Institutions; and Dan Quan, who led the Innovation Office at the CFPB, is responsible for the first-ever no-action letter issued by the CFPB blessing the use of AI in loan underwriting, and recently joined the Technology Council at the Alliance for Innovative Regulation, run by Jo Ann Barefoot, who herself is a long-time senior OCC official and I think has done some of the finest work at the cutting edge of regulation and technology.

Peter: Yes, some of those are the people who have actually reached out to me, a shout out to Dan Quan, who was the first person to tell me that I needed to check you guys out. So, anyway, I want to talk about the future here. You say your mission is to build fairness infrastructure for the Internet, which is a little bit broader than just the lending space, so let’s talk about where you want to take FairPlay.

Kareem: Yeah, Peter, our view is that as algorithms take over higher and higher stakes decisions in people’s lives, the ability to de-bias those digital decisions in something approaching real-time will be essential. And so, our software was developed to allow anybody using an algorithm to make a high-stakes decision about someone’s life to answer five questions.

Is my algorithm fair? If not, why not? Could it be fairer? What’s the economic impact to our business of being fairer? And finally, did we give our declines, the folks we rejected, a Second Look to see if they might resemble good applicants on dimensions that the primary decisioning system didn’t heavily take into account?

Our tool is a tool of general applicability so, yes, we’ve gone to market in financial services as the domain we understand best, and there’s a regulatory regime there that is supportive of fairness, but we are also making headway in industries like insurance, employment, government services and even marketing. So, our view is that just as Google built the search infrastructure for the Internet, and just as Stripe built payments infrastructure for the Internet, so too will we build fairness infrastructure for the Internet.

Peter: Okay. Well, if you can build a company with the size and influence of those two, you’re going to be doing really, really well for yourself. So, Kareem, thank you very much for coming on the show, best of luck, it’s a noble mission, you really are breaking new ground here, so congratulations on your success to date.

Kareem: Thank you for having me, Peter, I’ve enjoyed the conversation and I believe that LendIt is the must attend conference for the fintech industry.

Peter: Okay, hear, hear! Thanks. Thanks again, see you.

You know, it seems staggering to me that here we are in 2022 and technology hasn’t really solved the sort of discrimination and lack of fairness that many people experience in our financial system. So, I really commend Kareem and FairPlay for doing something new, this is something that I think is needed and, you know, you can see by some of the companies that are using FairPlay, and there are more that are going to be announced soon, that this is something that is sorely needed by the industry. I think, you know, it’s going to help us dramatically in the conversations we have with regulators and, regardless, it’s the right thing to do, and that’s sort of what I came out of this conversation with.

Anyway, on that note, I will sign off. I very much appreciate you listening and I’ll catch you next time. Bye.

(music)

  • Peter Renton

    Peter Renton is the chairman and co-founder of Fintech Nexus, the world’s largest digital media company focused on fintech. Peter has been writing about fintech since 2010 and he is the author and creator of the Fintech One-on-One Podcast, the first and longest-running fintech interview series.