I recently went to a talk by the famous venture capitalist Kai-Fu Lee in which he described a smartphone app that makes immediate decisions about whether to lend money to the user.
The app worked by analyzing thousands of signals on the phone and making a decision with a neural network. The algorithm was built by getting access to thousands of phones, some from people who pay their bills on time and some from people who don't, and then using an automated computer procedure to train a neural network on that data. Dr. Lee does not really understand how the algorithm reaches its decisions, and according to him, neither does anyone else. The algorithm is just a mysterious black box, the innards of which make no sense to anyone, but it works.
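To make the black-box point concrete, here is a minimal sketch of that kind of training, on entirely made-up data. Every feature, label, and number below is hypothetical; this is not Dr. Lee's system, just a tiny hand-rolled neural network of the same general shape.

```python
import numpy as np

# Hypothetical setup: each row is one phone's extracted features,
# labelled 1 if the owner repaid on time. All data here is synthetic.
rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 20))              # 20 opaque phone-derived features
true_w = rng.normal(size=20)              # hidden "real" pattern
y = (X @ true_w + rng.normal(size=n) > 0).astype(float)

# One hidden layer, trained by plain gradient descent on logistic loss.
W1 = rng.normal(scale=0.1, size=(20, 8))
W2 = rng.normal(scale=0.1, size=8)

def forward(X):
    h = np.tanh(X @ W1)                   # hidden activations
    return 1 / (1 + np.exp(-(h @ W2))), h # predicted repayment probability

for _ in range(1500):
    p, h = forward(X)
    err = p - y                           # gradient of logistic loss wrt output
    gW2 = h.T @ err / n
    gW1 = X.T @ (np.outer(err, W2) * (1 - h**2)) / n
    W2 -= 1.0 * gW2
    W1 -= 1.0 * gW1

p, _ = forward(X)
accuracy = ((p > 0.5) == y).mean()
# The fitted weights in W1 and W2 predict repayment fairly well, but no
# individual entry explains *why* any one applicant would be refused.
```

The trained model classifies the synthetic applicants accurately, yet the only artifacts it leaves behind are two matrices of numbers. That gap between predictive power and explainability is exactly the trade Dr. Lee described.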
I told Dr. Lee that, in my experience in consumer credit, I generally wanted a model with which I could explain decisions to potential borrowers, potential investors, colleagues, and regulators.
Dr. Lee's response was that my time has passed and that opaque algorithms are the way of the future.
He may be right — but I am not at all convinced that that is a good thing.
Algorithmic decision-making is becoming common. From Netflix recommending what to watch next, to large companies using algorithms to screen the resumes (and sometimes just the social media profiles) of job seekers, to credit card companies, auto lenders, and mortgage lenders using algorithms to make decisions, our lives are in many ways ruled by algorithms.
But with many of them working in ways we don't quite understand, do we really want our decisions made by something beyond the understanding of any actual human being?
The problem with these black boxes is that many of the variables fed into them are shaped by outside factors the algorithm never takes into account. This not only leaves existing inequities and injustices in place; in many cases it actually exacerbates them.
- Having had an internship probably counts for something, but does an internship (usually unpaid) indicate something about the applicant, or something about the applicant's parents being able to support him or her financially while working without pay for a summer?
- It may be true that mortgage foreclosures were much higher in some zip codes than in others over the past fifteen years, but does that indicate something about the people now trying to buy homes, or something about the price bubble in 2006?
- Some people have indeed accumulated debts that they cannot pay, but sometimes that indicates not a defect in their character, but that they live in municipalities funded by penalties and late fees assessed against their less politically powerful residents.
In each of those cases a computer may be making the mathematically correct decision in declining to hire, or to lend to, someone from a less privileged background, but do those factors really reflect that person's capabilities, or the circumstances they have experienced? Even worse, in many cases there is no way to test these factors independently for bias, because the actual mechanism of the algorithm is so opaque.
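Even the simplest kind of outside audit shows what opacity costs us. Below is a hedged sketch, on invented decisions and group labels, of the "four-fifths rule" check that regulators commonly apply to hiring and lending outcomes: compare approval rates across groups and flag a large gap.

```python
# Hypothetical lending decisions (1 = approved) for two groups of
# applicants. All numbers here are invented for illustration only.
decisions = [1, 1, 0, 1, 1, 0, 1, 0, 1, 1,   # group A outcomes
             1, 0, 0, 0, 1, 0, 0, 1, 0, 0]   # group B outcomes
groups = ["A"] * 10 + ["B"] * 10

def approval_rate(group):
    """Fraction of applicants in the given group who were approved."""
    outcomes = [d for d, g in zip(decisions, groups) if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = approval_rate("A")                  # 7 of 10 approved -> 0.7
rate_b = approval_rate("B")                  # 3 of 10 approved -> 0.3
impact_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

# Under the four-fifths guideline, a ratio below 0.8 flags the process
# for disparate-impact review.
flagged = impact_ratio < 0.8
```

The catch is that this test only sees outcomes. With an explainable model, an auditor could then trace the disparity to a specific input, like zip code or internship history; with a black box, the audit stops at the flag.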
We know that algorithms can be used to institutionalize and credentialize injustices, yet just recently the Trump administration proposed new rules that would exempt mortgage lenders from claims of racial discrimination if they use algorithms.
This is absurd. Using a computer is not a defense against racism. It is simply being racist using a more powerful tool.