Earlier this week, CNBC expressed concerns regarding artificial intelligence:
Fed banking regulator warns A.I. could lead to illegal lending practices like excluding minorities
The 21st century is fast approaching the quarter mark. With the emergence of accessible, increasingly popular A.I. tools, it wouldn’t be the worst wager that the growth, development, and eventual ubiquity of A.I. is all but inevitable. It will be fascinating to witness how A.I. effects change in various industries, especially the financial sector and Hollywood.
CNBC’s warning came from a speech given the same day by Michael S. Barr, the Fed’s Vice Chair for Supervision, titled Furthering the Vision of the Fair Housing Act:
The digital economy has produced alternative data sources, some of which can provide a window into the creditworthiness of an individual who does not have a standard credit history.
So far so good. At relatively low cost, machine learning may find new ways to assist those struggling to obtain credit. However, he goes on to say:
While these technologies have enormous potential, they also carry risks of violating fair lending laws and perpetuating the very disparities that they have the potential to address.
Bad inputs leading to poor outputs are a concern. Worse yet, fundamental problems can exist in the system itself:
Use of machine learning or other artificial intelligence may perpetuate or even amplify bias or inaccuracies inherent in the data used to train the system or make incorrect predictions if that data set is incomplete or nonrepresentative.
He provided an example:
For instance, digital redlining in marketing—the use of criteria to exclude majority-minority communities or minority applications—is one risk…
That is certainly possible. One would expect that in a credit report, past and current employment and financial history would factor into one’s assessment, not one’s race.
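To see how redlining can occur even when race is never used as an input, consider the following minimal sketch. The data is entirely hypothetical, and the "model" is deliberately naive (an approval rate per ZIP code); it only illustrates how a correlated proxy such as location can reproduce historical bias:

```python
# Hypothetical illustration: a model that never sees race can still
# "redline" if it learns from a proxy feature correlated with race.
from collections import defaultdict

# Hypothetical historical lending decisions. Incomes are comparable,
# but applicants in ZIP "10001" (a majority-minority area in this toy
# example) were mostly denied in the past.
history = [
    {"zip": "10001", "income": 50, "approved": False},
    {"zip": "10001", "income": 52, "approved": False},
    {"zip": "10001", "income": 51, "approved": True},
    {"zip": "20002", "income": 50, "approved": True},
    {"zip": "20002", "income": 49, "approved": True},
    {"zip": "20002", "income": 51, "approved": False},
]

# "Train" the naive model: compute the historical approval rate per ZIP.
totals, approvals = defaultdict(int), defaultdict(int)
for record in history:
    totals[record["zip"]] += 1
    approvals[record["zip"]] += record["approved"]

def predict(zip_code):
    """Approve only if the historical approval rate in that ZIP exceeds 50%."""
    return approvals[zip_code] / totals[zip_code] > 0.5

# Two applicants with nearly identical finances get different outcomes,
# purely because of where they live.
print(predict("10001"))  # False: denied
print(predict("20002"))  # True: approved
```

The point of the sketch is that the bias lives in the training data, not in any explicit racial variable, which is precisely the risk Barr describes.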
Ultimately, the use of A.I. should be embraced for its potential to save both time and money. While it may be employed to assist loan officers with credit applications, it could also lead to redlining practices. Defining these practices and proving their occurrence could prove a costly challenge for federal regulators who likely don’t understand the technology themselves. We’re not yet at the stage where a nefarious A.I. can take the blame for our problems. Should that day ever come, we’ll have much larger issues at hand!
Nonetheless, in a freer world without a Federal Reserve system responsible for the economic booms and busts, there would be fewer impoverished communities and much less economic disparity. As A.I. advances, with no taxpayer-funded regulator, its potential would help entrepreneurs across the socio-economic spectrum bring valuable products to market.