The Unhappy Fit of Algorithms with DEI

Algorithms are being targeted, especially as they relate to DEI.

The hodgepodge of abbreviations infecting commercial discourse now includes the heralded government commitment to the pursuits of DEI, or D, E, and I: diversity, equity, and inclusion (not to be confused with Dei as in Agnus Dei).

As a corollary of this commitment, disparate impact is a favored tool, regardless of its legal sufficiency or relevance.

A Fictional Tale
Darby Finnegan, an arbitrageur, founded the Dara Knot Company, a futures and commodities transactional organization based in the District of Columbia. Finnegan needed highly experienced, sophisticated computer science professionals and instructed his Human Resources Department to execute an exacting, specific search for such candidates. The search was conducted using artificial intelligence, with algorithms built from the important characteristics sought in potential candidates. It yielded several white males who closely matched the criteria. The search had no limitations based upon discrete and insular discriminatory criteria such as race or gender.

Could such a search be construed to be a violation of the law? Astoundingly, the answer could be yes if a pending bill in the District of Columbia is passed: Attorney General Karl A. Racine’s Stop Discrimination by Algorithms Act.

What Are Algorithms – Some Assembly Required
An algorithm is simply a process, or set of rules, to be followed in a problem-solving operation. Following the printed assembly directions to construct a pre-packaged propane grill, or the process for doing the laundry, is an algorithm at work. In the computer world, a search engine runs on algorithms, and so does a computer program that qualifies people for financing, health care, insurance, or employment.
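The "set of rules" definition above can be made concrete in a few lines of code. The sketch below is purely illustrative; the function name, thresholds, and criteria are hypothetical and not drawn from any actual lender's program.

```python
def qualifies_for_financing(income, debt, credit_score):
    """A rule-based algorithm: a fixed sequence of checks applied
    mechanically, in order, to every applicant."""
    if credit_score < 620:     # minimum score cutoff (illustrative)
        return False
    if income <= 0:            # no income, no qualification
        return False
    if debt / income > 0.43:   # debt-to-income ceiling (illustrative)
        return False
    return True
```

Note that whatever does or does not discriminate here is the rule set itself; the computer merely applies it, exactly as written, every time.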

Star Trek Thinking
Related to computer algorithmic calculations are machine learning (ML) and artificial intelligence (AI). AI has two levels: Narrow AI (NAI) and General AI (GAI).

AI is a broad term for software that makes it possible for machines to learn from experience. It adjusts to new inputs to perform tasks such as driving a vehicle without human intervention.

ML is a subset of AI that utilizes computer algorithms which improve automatically through experience and the use of data.
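The phrase "improve automatically through experience" can be illustrated with a toy example. The code below is a minimal sketch, not any production ML system: a single parameter is repeatedly adjusted against example data (gradient descent on squared error) until the program's predictions fit what it has "seen."

```python
def fit_slope(xs, ys, lr=0.01, epochs=200):
    """Toy machine learning: start with a guess (w = 0) and nudge it,
    one example at a time, toward the value that best explains the data."""
    w = 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            error = w * x - y        # how wrong the current guess is
            w -= lr * 2 * error * x  # adjust the guess to shrink the error
    return w

# Trained on data where y is always twice x, the learned
# parameter settles near 2 -- no rule "y = 2x" was ever written down.
```

No programmer specifies the answer; the program converges on it from the data, which is precisely what distinguishes ML from the fixed rule sets described earlier.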

NAI is a collection of technologies that rely on algorithms and programmatic responses to simulate intelligence. It is like a match game: a human request is made, and the software matches it. No true analysis takes place.

However, GAI is intended to emulate human analysis and reason, to think on its own. The goal of GAI research is to engineer AI that learns in a manner that matches or surpasses human intelligence.

The Stop Discrimination by Algorithms Act would appear to be an attempt to discipline AI, ML, NAI, and GAI, according to various commentators.

The Proposed Act
The proposed Stop Discrimination by Algorithms Act aspires to prevent computer-utilized algorithms from advancing discrimination and has three major components.

Quoting from Attorney General Racine’s letter to the D.C. Council Chairman, these components are explained:

“This Bill would combat these problems and set baseline standards of fairness by requiring entities that make algorithmic decisions about important life opportunities to:
• stop the discriminatory use of traits like race, sex, and disability in automated decisions about employment, housing, education, and public accommodations;
• audit algorithms for discriminatory patterns and report the results and any corrective actions to the Office of the Attorney General; and
• disclose and explain when algorithms negatively affect a consumer’s opportunities.”

Bold Proclamations
D.C. Attorney General Racine made this bold statement regarding the proposed act:

“Not surprisingly, algorithmic decision-making computer programs have been convincingly proven to replicate and, worse, exacerbate racial and other illegal bias in critical services that all residents of the United States require to function in our treasured capitalistic society.”

Numerous organizations leapt at the opportunity to support the proposed act, citing various societal ills in which economic success differs between groups. These organizations appear to want to use this law as a tool to advance equity and inclusion, not necessarily only equality.

Does this Law Add Much?
With the torrent of discrimination cases, and the state and federal legislation already enacted, one has to wonder whether companies would utilize AI and algorithms to produce a discriminatory result. Should they do so, the legal risk is daunting. Civil rights laws certainly apply to decisions regarding these matters regardless of the methodology employed.

The examples cited in Racine’s letter in support of the act do not lend robust credence to his arguments. In one example of allegedly faulty AI and algorithms, the program produced candidates who seemed to match the profile of successful associates at a particular company; it did not appear to discriminate. In another, a result was merely stated without establishing a cause and effect in medical care. In a third, a social media company allowed advertisers to decide who receives their housing-related advertisements based on users’ race, sex, ZIP code, and other characteristics; that company is being prosecuted by HUD. (In this third case, one has to ask: How is the practice any different from selecting certain magazines, radio stations, or newspapers in which to advertise homes for sale?)

The use of AI and algorithms would seem to be merely a method in these cases, not a prohibited practice unto itself. Once again, if these practices are illegal, it is not the AI and algorithms that constitute the illegality. Federal law and the District of Columbia’s strong anti-discrimination laws would certainly apply regardless of the method. The proposed law appears to be unnecessary and duplicative of current law, but it could be seen as a means to regulate by enforcement, producing an overreaction by business.

The Pursuit of Equity and Inclusion
Businesses don’t want to be prosecuted. If an algorithm produces a result like that of the fictional Dara Knot Company, it could trigger an investigation, as an employment search yielding no BIPOC (Black, Indigenous, or People of Color) candidates would appear suspicious. To avoid this result, businesses may simply add factors to their algorithms to produce the government’s desired result of equity. If the true purpose of this law is to surreptitiously advance equity and inclusion, then it may succeed. Other such laws may be in the planning stages.

Terry O’Loughlin, J.D., M.B.A., director of compliance for The Reynolds and Reynolds Company, has nearly 30 years of legal and regulatory experience in motor vehicle-related fields. From 1989-2006, O’Loughlin served with the Florida Office of the Attorney General, investigating and prosecuting automobile dealers, manufacturers, and financing and leasing companies. He led a task force that examined more than 100,000 motor vehicle files and settled with over 1,600 vehicle dealers for more than $15,000,000.00. O’Loughlin helped to draft and served as mediator of Florida’s Motor Vehicle Lease Disclosure Act. He has served as a consultant to the Federal Reserve Board’s Leasing Education Committee and has routinely advised numerous states’ agencies on motor vehicle fraud. Admitted to both the Pennsylvania and Florida Bars, O’Loughlin graduated from the University of Pittsburgh and received his graduate degrees from the University of Dayton.