*This article was published in Bank Director magazine’s first quarter 2024 issue.*
In the year since ChatGPT was released, many have hailed generative AI (GenAI) as a paradigm-shifting technology that will transform banking. But others have characterized it as a Trojan horse that will expose financial institutions to risk. Both positions are partly true.
GenAI does have the potential to radically transform the banking industry for the better. At the same time, there are security and compliance issues that financial institutions need to be aware of, particularly since government regulations are still evolving.
By applying certain guidelines and automated monitoring, bankers who have been cautious about GenAI can adopt new tools into their ecosystem — and leave the Trojan horse at the door. I recently asked my colleagues in security, legal and compliance about key issues banks should consider before adopting AI tools.
Intellectual Property Ownership
Large language models (LLMs), like the one behind ChatGPT, are trained on huge amounts of data, and many publicly available models continue to be trained on the inputs users provide.
As enterprises, we must retain ownership of our intellectual property and protect confidential information when employees use these publicly available large language models.
When evaluating a GenAI tool, it’s important to ensure that any data entered into a prompt will not be used to train a public LLM. With many enterprise tools, paying a license fee allows the institution to opt out of having its data used to train the model. Banks should evaluate each tool individually against what it is meant to achieve and review the contractual terms carefully. Periodically, ensure the vendor can demonstrate that corporate data is not retained in the GenAI solution.
It’s also vitally important that bank leadership understands how its teams are using GenAI tools, trains employees on what information they can and can’t disclose when using each tool, and ensures the security team has automated tools and capabilities to monitor the information entered into prompts.
Responsibly Adopting New Tools
Because regulations in this space are evolving, banks need to be aware of their requirements and, in turn, manage expectations for any third-party vendors with whom they partner. To fully understand the changing regulatory and security environments that apply to the use of AI, financial institutions must have ongoing conversations with their regulators.
One area regulators care about is instances where AI makes decisions that affect customers, specifically in underwriting and in locking down accounts when the system suspects fraud. We fully expect that where AI use has, or could have, negative outcomes for individuals, regulators will issue both rules and policy.
In this context, bankers need to ask themselves what risk-mitigating strategies and levels of automation are acceptable.
When choosing AI tools, prioritize ethical solutions that protect both the customers’ and institutions’ confidentiality and privacy. Look for third parties that share the institution’s principles of ethics, data security and privacy.
Trusting the Output of LLMs
There is an element of randomness, variability and even falsehood in the output LLMs produce (fabricated responses known as hallucinations), yet the models deliver every answer with the same level of confidence.
AI and new language models must be connected to a knowledge model that has a robust set of data, with specifications that instruct the technology on how to operate within a specific industry. For example, a knowledge model specific to the financial services sector can provide a deep, data-driven understanding of bank policies, procedures and workflows while appreciating regulatory and compliance requirements.