John Meyer
Managing Director

In Jack Finney’s 1954 novel “The Body Snatchers,” a California town is invaded by plant-like pods that have drifted to Earth from space. The pods replace sleeping people with perfect physical duplicates who retain their victims’ knowledge but are incapable of human emotion or feeling. The human victims disappear forever.

Now, we’re in a reality where artificial intelligence (AI) workers, or AI agents, are replacing full-time equivalent roles at companies. It’s not much of a leap to think of the AI agents as the new job-snatching pod people, and there are big implications for bank executives.

Recently, the Bank of New York Mellon announced that it has employed dozens of digital AI agents that report to human managers. Their current work? Coding and payment instruction validation. According to a late June report in The Wall Street Journal, they will soon have their own email accounts.

With email, it’s only a matter of time before these AI agents take over customer communication and interactions. The larger question the industry must ask is, “When will the digital workers replace human workers?” When that comes to pass, it will fulfill the words of Jamie Dimon, CEO of JPMorgan Chase & Co., who advised his employees to view artificial intelligence-driven job elimination as beneficial. He told workers that “attrition is your friend.”

As this shift accelerates, bank executives must proactively decide how much decision-making power to delegate to digital workers and whether their data infrastructure is ready for these AI agents to safely interact with customers in an empathic manner.

Supervising the Bots: Daily, Not Annually
In my AI presentations, I often share that AI has no conscience; therefore, bank executives need to validate that the key attributes the technology uses to make decisions don’t violate fair lending; unfair, deceptive, or abusive acts or practices (UDAAP); or Community Reinvestment Act guidelines. Regulation in banking means having controls in place to monitor AI inputs and outputs, so how will human managers track the AI workers’ decisions, and how often?
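
To make that concrete, here is one minimal sketch of such a control: a four-fifths-rule check run against an agent’s logged decisions. The record fields, group labels and 0.8 threshold are illustrative assumptions, not a prescription for any particular model or regulation.

# Minimal sketch: four-fifths (80%) adverse-impact check on logged AI decisions.
# The record fields and the 0.8 threshold are illustrative assumptions.
from collections import defaultdict

def approval_rates(decisions):
    """Compute the approval rate per demographic group from decision logs."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        if d["approved"]:
            approvals[d["group"]] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def adverse_impact_flags(decisions, threshold=0.8):
    """Flag groups whose approval rate falls below threshold x the highest rate."""
    rates = approval_rates(decisions)
    benchmark = max(rates.values())
    return {g: rate / benchmark for g, rate in rates.items()
            if rate / benchmark < threshold}

# Example with hypothetical logged decisions.
log = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
]
print(adverse_impact_flags(log))  # {'B': 0.5}: group B is below 80% of group A

A check like this only covers outputs; a full control framework would also log the input attributes each decision relied on, so reviewers can trace why a flagged disparity occurred.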

According to Andy Winskill, founder of Agents in the Boardroom, AI managers must hold regular alignment reviews. “Managers must consistently sample AI outputs, detect any divergence from intended performance, and promptly recalibrate or retrain the model,” he told American Banker. “Unlike quarterly human performance reviews, AI reviews are continuous, requiring near-daily monitoring and adjustment.”
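
A rough sketch of what that near-daily loop could look like appears below. The field names, sample size and drift tolerance are all hypothetical placeholders, not Winskill’s method.

# Minimal sketch of a near-daily review loop: sample the agent's outputs,
# measure divergence from a validation-time baseline, and escalate on drift.
# All names and thresholds are illustrative assumptions.
import random

def daily_review(outputs, baseline_error_rate, sample_size=50, tolerance=0.02):
    """Sample today's agent outputs and compare the observed error rate
    to the baseline established when the model was validated."""
    sample = random.sample(outputs, min(sample_size, len(outputs)))
    observed = sum(1 for o in sample if not o["passed_validation"]) / len(sample)
    drift = observed - baseline_error_rate
    if drift > tolerance:
        # In practice this would open a ticket for recalibration or retraining.
        return {"action": "recalibrate", "observed": observed, "drift": drift}
    return {"action": "ok", "observed": observed, "drift": drift}

# Example: 500 hypothetical payment-validation outputs from one day.
todays_outputs = [{"passed_validation": random.random() > 0.05} for _ in range(500)]
print(daily_review(todays_outputs, baseline_error_rate=0.03))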

It’s hard enough to get leaders to conduct annual reviews on time. Moreover, conducting these reviews means the bank must employ talent who can review the data outputs for bias, and that talent must exist not only in the business units using the AI but also in internal audit and third-party review functions.

While tackling the regulatory decision-making concerns, bankers should ask themselves whether their data is ready for AI to interact with their customers. At most banks, quantitative data remains siloed, with key information scattered across the core system, commercial lending system, consumer lending system, digital banking system and payment systems. Bankers need to verify which records will be the sources of truth for their AI models and whether those sources of truth are accurate. Quantitative data, however, does not address one of your bank’s key differentiators: the emotional intelligence side of customer relationships. That side will require collecting more sources of data to train the bots on empathy.
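
As a simple illustration, a reconciliation check like the sketch below can surface where a designated source of truth disagrees with the other systems. The system and field names are hypothetical placeholders.

# Minimal sketch: reconcile one customer attribute across siloed systems to
# test whether the designated source of truth is actually accurate.
core =    {"cust_001": {"address": "12 Main St"}}
lending = {"cust_001": {"address": "12 Main Street"}}
digital = {"cust_001": {"address": "12 Main St"}}

SYSTEMS = {"core": core, "commercial_lending": lending, "digital_banking": digital}
SOURCE_OF_TRUTH = "core"  # the record the AI model would be told to trust

def reconcile(cust_id, field):
    """Compare every system's value for `field` against the source of truth."""
    truth = SYSTEMS[SOURCE_OF_TRUTH][cust_id][field]
    mismatches = {name: sys[cust_id][field]
                  for name, sys in SYSTEMS.items()
                  if sys[cust_id][field] != truth}
    return {"truth": truth, "mismatches": mismatches}

print(reconcile("cust_001", "address"))
# {'truth': '12 Main St', 'mismatches': {'commercial_lending': '12 Main Street'}}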

Ron Shevlin, chief research officer at Cornerstone Advisors, writes that “leveraging qualitative data like customer feedback, call center transcripts, chatbot logs, survey responses, employee notes, policies, and procedures is a goldmine for training large language models” that will prepare the AI pod workers. These interactions, not transactions, allow the pod workers to learn your bank’s focus on customer satisfaction. Unlike the pod people in “The Body Snatchers,” community bankers want their digital workers to show some empathy for their customers.

To get qualitative data ready for AI agents, bankers should develop a policy and practice for standardizing the collection process: create templates or structured forms for capturing customer comments and service interactions, and implement a consistent tagging framework across the customer feedback channels where the AI agents will interact. Next, bankers must run sentiment analysis on the data to help the AI workers de-escalate frustrated customer calls, chats and emails.
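
Here is a minimal sketch of both pieces: a structured feedback record validated against a shared tag set, plus a toy sentiment score used to route hot interactions to a human. A production system would use a trained sentiment model; the tags, word lists and escalation threshold below are illustrative assumptions.

# Minimal sketch: structured feedback capture with a consistent tag framework
# and a toy lexicon-based sentiment score for de-escalation routing.
NEGATIVE = {"angry", "frustrated", "unacceptable", "worst", "closed"}
POSITIVE = {"thanks", "great", "helpful", "resolved"}
ALLOWED_TAGS = {"fees", "digital_banking", "lending", "payments", "service"}

def capture_feedback(channel, text, tags):
    """Validate tags against the shared framework and attach a sentiment score."""
    if not set(tags) <= ALLOWED_TAGS:
        raise ValueError(f"unknown tags: {set(tags) - ALLOWED_TAGS}")
    words = [w.strip(".,!?") for w in text.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return {
        "channel": channel,
        "text": text,
        "tags": sorted(tags),
        "sentiment": score,
        "escalate_to_human": score <= -2,  # hand angry interactions to a person
    }

print(capture_feedback("chat", "This is unacceptable, I am frustrated and angry",
                       ["fees", "service"]))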

Sound like a lot? It’s required. In a world where most bankers view their personal relationships as their key value proposition, employing AI agents threatens to disrupt this differentiator unless the technology learns empathy and emotional intelligence. This effort requires frequent model validation to guard against bias, standardization of qualitative data and sentiment analysis before the AI agents can represent your bank.

WRITTEN BY

John Meyer

Managing Director

As a managing director with Cornerstone Advisors, John Meyer leads the firm’s Business Intelligence and Data Analytics practice. In this role, he helps community banks and credit unions better use the data they have to make smarter decisions about risks and opportunities.