On July 7, 2020, the Consumer Financial Protection Bureau (CFPB or Bureau) published a blog post on the use of artificial intelligence (AI), especially machine learning (ML), in credit underwriting. The blog post addresses industry concerns about how AI and ML models interact with the existing regulatory framework, specifically the adverse action notice requirements in the Equal Credit Opportunity Act (ECOA) and the Fair Credit Reporting Act (FCRA).
AI and ML have the potential to expand credit access to millions of consumers. Using AI models, lenders can evaluate applicant information beyond what is captured by FICO and other credit scores, in new and dynamic ways, which could result in more efficient credit decisions. Lenders would be able to identify consumers who are classified as risky because of thin credit files or a lack of credit history but who do not share the risk traits of other borrowers with low credit scores. These consumers could be offered credit at a better price than they would otherwise receive, one that reflects their true level of risk. However, the Bureau notes that AI is not without risks, including unlawful discrimination, lack of transparency, and privacy concerns.
Industry uncertainty over how AI and ML fit into the existing regulatory framework, including in the context of adverse action notice requirements under the ECOA and the FCRA, has slowed the adoption of AI in credit underwriting. The adverse action notice requirements under the ECOA and the FCRA generally require creditors to provide specific reasons for, or disclose the information relied upon in, taking an adverse action. The ECOA requires creditors to notify applicants when an adverse action is taken and to provide a statement of the specific reasons for the action. The FCRA requires adverse action notices in several circumstances, including when the adverse action is based in whole or in part on information in a consumer report.
There has been uncertainty regarding how creditors can issue adverse action notices compliant with the ECOA and the FCRA when the creditor relied on AI models with non-intuitive relationships in making the adverse action decision. The Bureau explained, however, that the existing regulatory framework already has the flexibility to accommodate the use of AI and ML in credit underwriting. For example, an Official Interpretation to Regulation B states that creditors need not describe how or why a disclosed factor adversely affected an application or relates to creditworthiness. Under this Official Interpretation, a creditor may disclose the reasons for an adverse action without being required to explain how those reasons affected the applicant's creditworthiness. This latitude would allow creditors to use increasingly complex AI models in credit underwriting without the concern that they must explain to affected applicants how the AI model reached its specific conclusion.
The blog post illustrates the Bureau's continuing efforts to address the impact of new and evolving technologies on consumer finance, and underscores the Bureau's desire to use new regulatory tools to mitigate regulatory uncertainty and promote innovation. The blog post specifically recommends that stakeholders use the Bureau's newest regulatory tools—the Policy to Encourage Trial Disclosure Programs (TDP Policy), the revised No-Action Letter Policy (NAL Policy), and the Compliance Assistance Sandbox Policy (CAS Policy)—to address areas of regulatory uncertainty, including AI and adverse action notices under the ECOA and the FCRA. The Bureau invites stakeholders to use these and its other tools to explore three specific areas related to the use of AI in credit underwriting: (1) the methodologies for determining the principal reasons for an adverse action under the ECOA or the FCRA; (2) the accuracy of explainability methods, particularly as applied to deep learning and other complex ensemble models; and (3) how to convey the principal reasons in a manner that accurately reflects the factors used in the model and is understandable to consumers. The Bureau encourages financial institutions and other stakeholders to assist it in understanding the market and the impact of its regulations in connection with the use of AI.
Brian Fritzsche contributed to the drafting of this client alert.