Evolving Compliance Risks for Artificial Intelligence
Bank directors should understand the developing consumer compliance risks associated with AI.
Brought to you by Skadden
Darren Welch, a partner in Skadden’s Consumer Financial Services Group, represents and advises a broad range of companies and individuals in regulatory investigations, enforcement proceedings and examinations, as well as in civil litigation on all types of consumer financial services issues.

With advances in technology, banks face both tremendous opportunities and unprecedented competitive pressure to use artificial intelligence (AI). Not only can AI power modeling and decision-making tools; increasingly it is also used to generate new content — referred to as generative AI — for use by bank customers and employees.
But with these new opportunities come significant compliance risks. Rather than issuing new rules governing AI, regulators have thus far largely applied existing laws and regulations, often written decades ago, to new technologies, offering only limited guidance.
Given that lack of comprehensive guidance, there is significant uncertainty in the current consumer financial regulatory environment, heightened by varying approaches and priorities across agencies and under the new Trump administration.
Key compliance risks for banks relating to the use of AI include:
- Fair lending. Regulators have repeatedly expressed concerns that using AI in connection with credit underwriting decisions may result in discrimination on the basis of race, ethnicity or other prohibited factors. Banks can mitigate these risks by adopting robust testing protocols that assess potential disparate impact and other fair lending risks (see the illustrative sketch after this list).
- Explainability and adverse action notices. To assess and mitigate risk, banks need to understand how their AI tools work and be able to explain how those tools arrive at decisions affecting applicants and customers. Additionally, when models using AI result in adverse action on a credit application or an existing account, banks must inform consumers of the specific factors in the model that led to that adverse action.
- Customer service. Banks increasingly rely on chatbots and other AI-based tools to assist with customer service. If these tools provide customers with inaccurate information, banks risk violating prohibitions against unfair, deceptive or abusive acts or practices (UDAAP). And if the use of AI in chatbots or elsewhere creates obstacles for consumers seeking information about their accounts, banks face increased risk under laws that give consumers the right to access that information, including section 1034(c) of the Consumer Financial Protection Act.
- Credit reports. As with the use of credit reports and credit scores generally, when institutions use AI tools driven by information in credit reports or other consumer reports to underwrite loan applications or make employment decisions, they must comply with the Fair Credit Reporting Act (FCRA). The FCRA imposes requirements relating to consumer notifications and permissions and limits use of the data to certain permissible purposes.
- Third-party oversight. When vendors provide AI products and services, banks must ensure appropriate oversight of those relationships. A bank with limited visibility into how vendor-provided AI tools work faces increased compliance risk.
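By way of illustration only, the short Python sketch below shows one simple metric sometimes used in disparate impact testing: the adverse impact ratio, which compares approval rates across demographic groups against the commonly cited four-fifths (0.8) benchmark. The group labels, data and threshold here are hypothetical assumptions; actual fair lending testing is considerably more rigorous and typically involves regression analysis and controls for legitimate credit factors.

```python
# Illustrative sketch only: a simple adverse impact ratio (AIR) check on the
# outcomes of a credit-decision model. Group definitions, data and the 0.8
# benchmark (the "four-fifths rule") are assumptions for demonstration, not
# a substitute for a full fair lending analysis.

def approval_rate(decisions: list[bool]) -> float:
    """Share of applications approved within a group."""
    return sum(decisions) / len(decisions) if decisions else 0.0

def adverse_impact_ratio(protected: list[bool], control: list[bool]) -> float:
    """AIR = protected-group approval rate / control-group approval rate."""
    control_rate = approval_rate(control)
    return approval_rate(protected) / control_rate if control_rate else 0.0

# Hypothetical model outcomes (True = approved)
protected_group = [True, False, False, True, False, True, False, False]
control_group = [True, True, False, True, True, True, False, True]

air = adverse_impact_ratio(protected_group, control_group)
print(f"Adverse impact ratio: {air:.2f}")
if air < 0.8:  # commonly cited four-fifths benchmark
    print("Potential disparate impact -- flag for further fair lending review")
```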
The legal and regulatory landscape applicable to the use of AI continues to evolve, with further changes expected under the new administration. On his first day in office, President Donald Trump rescinded former President Joe Biden’s executive order on Safe, Secure and Trustworthy Development and Use of Artificial Intelligence, and three days later he issued an executive order titled Removing Barriers to American Leadership in Artificial Intelligence. The new executive order articulates a policy to “sustain and enhance America’s global AI dominance,” directs the development of an AI action plan to implement that policy and calls for a review of federal agency practices to identify those inconsistent with that policy. At this early stage, it remains to be seen how actions by the Trump administration will affect federal regulation of AI in the banking industry specifically.
At the state level, lawmakers have recently enacted or proposed new, burdensome AI-related requirements applicable to financial services and other industries. In May 2024, Colorado enacted the Artificial Intelligence Act, which focuses on algorithmic discrimination in high-risk AI systems. The law requires deployers of high-risk AI systems to implement policies and procedures to mitigate those risks, notify state regulators of identified discrimination, notify consumers of the use of AI and give consumers the ability to opt out of AI-based decision-making in certain circumstances. Similar bills have been introduced or proposed in a number of other states, including Texas, Illinois and Vermont. State-level efforts can be expected to increase this year, as near-term prospects for a comprehensive federal bill imposing AI-specific consumer protections appear remote.
Bank directors would be well advised to understand the consumer compliance risks associated with AI, ensure that their institutions have implemented appropriate risk management oversight and AI governance structures, and stay informed of developments in this evolving area.