On the first full day of the new presidential administration, the Consumer Financial
Protection Bureau released the Winter 2025 issue of Supervisory Highlights. It was a
special edition on “advanced technologies” used by creditors in credit decisioning
processes, with a focus on avoiding illegal discrimination.
During my initial read, this publication seemed like a potentially useful discussion of
past CFPB guidance on compliance with the Equal Credit Opportunity Act. As the new
leadership has swept into the CFPB, however, we are left to wonder what, if any,
relevance it has to federal policy. Within days, CFPB staffers were ordered to return to
the office, then to stand down from pending work, and then to stay home. The CFPB
headquarters was subsequently locked down, and its lease was canceled. By the end of
the first month of President Trump’s tenure, a search for “CFPB.gov” returned the
stunning message on the CFPB’s homepage of “404: Page not found.”
But with a bit of effort, finding this issue of Supervisory Highlights is possible, for now.
The understandable question in many CFPB watchers’ eyes is whether the guidance was
dead on arrival. As this issue of Spot Delivery goes to press, it seems too soon to know.
At least a couple of reasons exist for creditors to pay close attention to the guidance
and to consider whether and how to address the topics it raises.
- Whether the CFPB survives or not, the ECOA will remain, and some federal agencies will have supervisory and/or enforcement responsibilities.
- The ECOA allows consumers to sue creditors directly for damages and attorneys’ fees, and civil litigation is apt to fill any perceived void left by federal agencies. Additionally, many states are already revving their enforcement engines.
So, let’s look at what this guidance says.
“Automated systems and advanced technology are not an excuse for
lawbreaking behavior.”
Wow. I think the CFPB wanted our attention!
The CFPB, the Department of Justice, and the Federal Trade Commission issued a joint
statement with this headline in April 2023, and the CFPB repeated it in the special
edition. In recent years, there has been no shortage of discussion in the academic and
popular press about algorithmic bias, which refers to biased results produced by the
artificial intelligence instructions that teach a computer how to learn and make
predictions from data. In essence, the concern is that AI can learn and perpetuate human biases.
Although the special edition stopped short of saying that the CFPB has found illegal
discrimination in its supervision of creditors’ use of advanced technologies for credit
decisions, the CFPB believes that many creditors must do more testing of their models.
The special edition provided examples of the CFPB’s concerns from two markets—credit
cards and auto credit—but the views apply generally to credit decisioning models. Using
simple computations, the CFPB found that Black and Hispanic applicants were treated
less favorably in underwriting and pricing of credit products. The CFPB believed that the
compliance management systems some creditors used were deficient in testing for and
correcting harm from illegal discrimination.
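The special edition does not show the CFPB’s arithmetic, but the kind of “simple computations” it describes can be sketched in a few lines of Python. Everything in the sketch below is hypothetical (the column names, the group labels, and the data), and the adverse impact ratio is only one of several disparity measures an examiner might use.

```python
# Illustrative only: a minimal sketch of the kind of "simple computations"
# regulators can use to compare underwriting and pricing outcomes across groups.
# The data, column names, and group labels are hypothetical.
import pandas as pd

applications = pd.DataFrame({
    "group":    ["Black", "Black", "Hispanic", "White", "White", "White"],
    "approved": [0, 1, 0, 1, 1, 0],
    "apr":      [None, 21.9, None, 14.5, 16.0, None],   # APR recorded only for approvals
})

# Approval rate and average APR (among approved applicants) by group
summary = applications.groupby("group").agg(
    approval_rate=("approved", "mean"),
    avg_apr=("apr", "mean"),
)

# Adverse impact ratio: each group's approval rate relative to the most favored group
summary["adverse_impact_ratio"] = summary["approval_rate"] / summary["approval_rate"].max()
print(summary)
```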
Here’s a quick refresher on the legal tests that should guide a creditor’s fair lending
compliance program. When a credit underwriting or pricing system, including a scoring
model, does not consider any prohibited basis, such as race, and does not contain
factors that operate as a “proxy” for a prohibited basis, we can conclude that the model
does not illegally discriminate under a legal theory called “disparate treatment.”
Disparate treatment is the theory used for intentional discrimination. It can be proved
by overt acts, or it can be inferred from a statistical analysis. When the model or system
produces significantly fewer favorable outcomes for a protected group and the creditor
cannot demonstrate that the disparity is the result of legitimate, nondiscriminatory
factors or considerations, the court may infer that the difference must result from illegal
consideration of race or another prohibited basis.
On the other hand, if the model or system considers only factors that are neutral on
their face, a creditor may still be liable under the ECOA under a “disparate impact”
theory, according to the CFPB and other regulators and enforcement agencies. The
disparate impact theory is controversial because it was created by the courts and is
usually not expressly written into antidiscrimination laws. Many legal scholars believe
that the courts should not apply a disparate impact theory under the ECOA because the
plain language of the ECOA requires intent to discriminate: “It shall be unlawful … to
discriminate against any applicant … on the basis of race ….”
Neither the Supreme Court nor federal appellate courts have directly answered the
question of whether disparate impact is a valid theory under the ECOA. However,
because federal agencies (and those of many states) consider disparate impact to be a
viable legal theory under the ECOA, these agencies regularly sue creditors for violating
the ECOA under a disparate impact theory.
Here is how the theory of disparate impact works:
- The plaintiff, usually a government agency, identifies the “neutral” factors that are directly responsible for unfavorable outcomes for a protected group. These factors might be an entire credit scoring model or one or more inputs of that model. The plaintiff demonstrates that, on average, a protected group has a less favorable outcome due directly to the identified factors. If a plaintiff successfully meets its burden under Stage 1 of the disparate impact test, the burden shifts to the creditor to defend its model or system.
- In Stage 2, the creditor must show that its model or system is justified by the creditor’s legitimate business needs, such as avoiding credit losses, increasing profits, or setting credit terms that appropriately price for risk. This stage of an enforcement action is usually a battle of the experts. The creditor defends its model, while the plaintiff tries to poke holes in that defense. If the creditor successfully defends its model under Stage 2, the burden shifts back to the plaintiff to establish that the creditor could have used a less discriminatory alternative (LDA) model or system.
- In Stage 3, the plaintiff tries to demonstrate that a change in the model or system, or even a wholly different system, would have met the creditor’s legitimate business needs just as well but would have been less discriminatory in its net effects. The LDA does not need to eliminate unequal outcomes, just reduce them. If a plaintiff meets its burden under Stage 3, the creditor has engaged in illegal discrimination under a disparate impact theory.
Historically, few if any investigations or court cases made it to Stage 3. In most cases,
the creditor settled the charges, or the government closed the investigation or inquiry
without charges because it was satisfied with the creditor’s business justification.
Moreover, finding a less discriminatory alternative was usually like hunting for a needle
in a haystack. But that is no longer the case. Now searching for an LDA is easy with the
right tools. Using sophisticated “debiasing” software, both creditors and the government
can generate hundreds or even thousands of similar models and can compare both their
effectiveness and their degree of disparate impact with the performance of the
challenged model.
And that’s what the CFPB said it did. Exam teams “identified potential alternative
models to the institution’s credit scoring models.” The potential alternatives “appeared
able to meaningfully reduce disparities while maintaining comparable predictive
performance as the institution’s original models.”
This development is a game changer for creditors, albeit one that has been coming for
some time. The CFPB (and other agencies) can search for—and sometimes identify—LDA
models, and the CFPB believes creditors should also be searching for LDAs.
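The special edition does not name the tools the exam teams used, and the sketch below is not anyone’s actual methodology. It simply illustrates the basic mechanics of an LDA search: build candidate variations of a model, score each candidate on predictive performance and on outcome disparity, and flag the candidates that keep performance roughly comparable while shrinking the disparity. The data, the model family (a logistic regression), and the tolerance thresholds are all assumptions made for illustration.

```python
# Illustrative sketch of a less discriminatory alternative (LDA) search:
# fit candidate models that each drop one input, then compare every candidate's
# predictive performance and approval-rate disparity with the original model.
# The data are synthetic and the thresholds are arbitrary placeholders.
import numpy as np
from itertools import combinations
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5_000
X = rng.normal(size=(n, 4))                # four hypothetical model inputs
group = rng.integers(0, 2, size=n)         # 1 = protected group (synthetic label)
X[:, 3] -= 0.8 * group                     # input 3 runs lower for the protected group
y = (X[:, 0] + 0.5 * X[:, 1] + 0.2 * X[:, 3] + rng.normal(size=n) > 0).astype(int)

def evaluate(inputs):
    """Return (AUC, approval-rate gap) for a model built on a subset of inputs."""
    model = LogisticRegression().fit(X[:, inputs], y)
    scores = model.predict_proba(X[:, inputs])[:, 1]
    approved = scores > np.quantile(scores, 0.5)   # approve the top half of scores
    gap = approved[group == 0].mean() - approved[group == 1].mean()
    return roc_auc_score(y, scores), gap

base_auc, base_gap = evaluate([0, 1, 2, 3])        # the "original" model uses all inputs
print(f"original model: AUC={base_auc:.3f}, approval-rate gap={base_gap:.3f}")

# Candidate alternatives: every three-input subset of the original four inputs
for subset in combinations(range(4), 3):
    auc, gap = evaluate(list(subset))
    if auc >= base_auc - 0.02 and abs(gap) < abs(base_gap):
        print(f"possible LDA using inputs {subset}: AUC={auc:.3f}, gap={gap:.3f}")
```

Commercial debiasing tools search far larger candidate spaces than this drop-one-input loop, but the output they produce, a performance-versus-disparity comparison for each candidate model, has the same shape.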
In addition, the CFPB warned that using alternative data—that is, data not typically
found in credit reports from the national bureaus or in the information consumers
provide on credit applications—“may present greater consumer protection risks
and warrant more robust compliance management.” The federal bank regulators
acknowledge that some alternative data that can inform credit decisioning and pricing
may benefit both creditors and consumers. They are not discouraging the use of
alternative data; they merely warn that creditors must manage the increased fair
lending risk.
The special edition cautioned that using models with a large number of inputs or with
inputs “not directly related to consumers’ finances and how consumers
manage their financial commitments may present greater consumer protection risk and
warrant more robust compliance management.”
Most creditors have data science teams who are very good at building predictive credit
models that meet the creditors’ needs, but few teams are experienced in mitigating fair
lending risk, and this is a real concern. The data science teams need specialized fair
lending training and guidance from fair lending compliance experts.
Here are some compliance practices the CFPB expects, especially for creditors using
alternative data or machine-learning credit models:
- Test for model inputs that may act as a proxy for a prohibited basis. A proxy is a factor or group of factors that are logically connected to prohibited bases, such as the relationship between neighborhoods and race or ethnicity. Various statistical tests identify a factor’s risk of being a proxy; a simple screening approach is sketched after this list.
- For inputs that contribute to disparities on a prohibited basis (which is common), document the business justification for using the factors, especially alternative data that lack a clear relationship with creditworthiness. It is not sufficient to simply say that the inputs contributed to the overall accuracy of the model. The contribution of each input to model accuracy should be quantified, and the business justification for using it should be documented.
- Make the search for LDA models part of every model creation and its periodic review and updating. The creditor should document each alternative model, its accuracy, and its degree of disparate impact, along with the basis for selecting or rejecting it.
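The CFPB does not prescribe a particular proxy test, so treat the following as a minimal sketch of one common screening idea: measure how well each individual input separates the protected group from the control group, and flag inputs with high separation for closer review. The input names, the synthetic data, and the 0.10 flagging threshold are all hypothetical.

```python
# Illustrative proxy screen: measure how well each individual model input
# separates the protected group from the control group. High separation
# flags the input for closer review as a potential proxy. The data are
# synthetic, and the 0.10 flagging threshold is an arbitrary placeholder.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 5_000
group = rng.integers(0, 2, size=n)                 # 1 = protected group (synthetic label)
inputs = {
    "income_estimate":  rng.normal(size=n),
    "zip_code_density": rng.normal(size=n) + 1.2 * group,   # built to behave like a proxy
    "months_on_file":   rng.normal(size=n),
}

for name, values in inputs.items():
    # AUC of 0.5 means the input tells us nothing about group membership;
    # values far from 0.5 mean it carries substantial group information.
    auc = roc_auc_score(group, values)
    flag = "REVIEW as possible proxy" if abs(auc - 0.5) > 0.10 else "low proxy risk"
    print(f"{name:18s} group-separation AUC = {auc:.2f}  -> {flag}")
```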
The final section of the special edition addressed creditors’ problems with giving
consumers accurate reasons for adverse action—with a special focus on auto creditors.
Auto creditors with credit scoring models using AI or machine learning had not validated
the accuracy of the reasons they gave based on model scores. The guidance reminded
readers that the reasons must “accurately describe the factors actually considered.”
At first blush, this guidance makes sense. For example, a creditor should not give a
reason for a model score decline that is unrelated to factors in the model. But, in
practice, compliance when using a machine-learning model can be challenging. For
example, a machine-learning model might use many inputs that are closely related,
such as the number of late payments on installment credit and the severity of those
delinquencies. In selecting the principal reasons for adverse action, should the creditor
assess each of the hundreds of inputs separately or combine similar ones?
Each separate input is likely to have only a minuscule impact on the applicant’s score.
If the creditor ranks each input individually, it risks giving a misleading impression,
elevating reasons with little individual impact over related factors that, taken together,
had a major impact on the insufficient score. The method for selecting adverse action
reasons in a machine-learning model requires expert guidance.
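Neither Regulation B nor the special edition dictates a single method, but one approach practitioners discuss is to aggregate per-input score contributions into a smaller set of consumer-facing reason groups before ranking them. The sketch below is purely illustrative; the input names, the groupings, and the contribution values are made up, and in practice the contributions would come from the creditor’s own explanation method.

```python
# Illustrative sketch: combine the per-input contributions to a declined
# applicant's score into reason groups, then rank the groups to select the
# principal adverse action reasons. The inputs, groupings, and contribution
# values are made up; in practice the contributions would come from the
# creditor's own explanation method (for example, SHAP-style attributions).
from collections import defaultdict

# Hypothetical per-input contributions to this applicant's score shortfall
# (more negative = pushed the score down more).
contributions = {
    "late_pays_installment_30d": -2.0,
    "late_pays_installment_60d": -3.0,
    "late_pays_installment_90d": -4.0,
    "revolving_utilization":     -6.0,
    "inquiries_last_6_months":   -1.0,
}

# Map closely related inputs to a single consumer-facing reason
reason_groups = {
    "late_pays_installment_30d": "Delinquency on installment accounts",
    "late_pays_installment_60d": "Delinquency on installment accounts",
    "late_pays_installment_90d": "Delinquency on installment accounts",
    "revolving_utilization":     "Proportion of revolving balances to credit limits",
    "inquiries_last_6_months":   "Number of recent credit inquiries",
}

combined = defaultdict(float)
for feature, value in contributions.items():
    combined[reason_groups[feature]] += value

# Principal reasons: the groups with the largest combined negative contribution
for reason, total in sorted(combined.items(), key=lambda kv: kv[1])[:4]:
    print(f"{reason}: combined contribution {total:.1f}")
```

In this made-up example, ranking the inputs individually would put revolving utilization first, but the three installment delinquency inputs, taken together, are the larger driver of the shortfall, which is exactly the misleading impression described above.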
The current turmoil in the initial days of the new administration, especially at the CFPB
and to a lesser extent at other federal agencies, makes it hard to predict how vigorously
the agencies will enforce this guidance. But creditors that decide that the guidance can
be safely ignored are probably making a mistake—possibly a costly one. The wiser course
is to treat this as a valuable time to review compliance procedures for
building and testing credit scoring models and for selecting accurate model-based
adverse action reasons. I suspect most creditors will find that they have considerable
room for improvement.
CounselorLibrary.com, LLC, provides articles on its website written by attorneys with Hudson Cook, LLP, and by other authors, for information purposes only. CounselorLibrary.com, LLC, and Hudson Cook, LLP, do not warrant the accuracy
or completeness of the articles, and have no duty to correct or update information contained on the CounselorLibrary.com website. The views and opinions contained in the articles do not constitute the views and opinions of CounselorLibrary.com, LLC, or Hudson Cook, LLP. Such articles do not constitute legal advice from such authors or from Hudson Cook, LLP, or CounselorLibrary.com, LLC. For legal advice on a matter, one should seek the advice of legal counsel.