Chris Marinac
Director of Research

I am old enough to remember the launch of corporate email for internal correspondence and, soon after, external client conversations. I had to transition manual bank earnings, financial statements and daily stock prices from my green accounting ledger pad into Microsoft Excel. Looking back, there was exponential change in daily workflow, which boosted my skills and my responsibilities. The same could be said about today’s artificial intelligence (AI) programs and how we analyze and learn about the U.S. banking industry.

There’s been much discussion about how the latest iteration of AI, generative AI, might be utilized in the banking industry and reshape the overall labor market. But right now, I’m skeptical of the impact that this technology will have, given the issues that still exist with it.

Answering Questions
For instance, asking ChatGPT and Perplexity AI, two popular generative AI engines, mundane questions about the banking industry — such as “What are the trends in U.S. bank deposits?” — reveals the shortcomings of the technology.

For this deposit question, the ChatGPT response said that banks had explosive deposit growth during Covid-19, encountered a temporary setback in 2023 when Silicon Valley Bank failed, and resumed growth in 2024. Perplexity dug somewhat deeper, noting that brokered certificates of deposit (CDs) had declined since 2023 and that core funding was stable and should rise 3% to 5% in 2025. These responses show the lack of critical thinking that comes with generative AI.

I was disappointed to see that asking the AI platforms about basic banking topics led to answers that merely repeated the most recent news story or recapped quarterly earnings and operating ratios from the banking regulators. I didn’t learn anything insightful or new, and the responses lacked true analysis.

Remember, I have been a publishing research analyst since 1992, and the recent deposit history is not lost on me, since I lived it day-to-day for 132 earnings seasons. The brokered CD detail helps, but only barely, as an anecdote rather than any earth-shattering trend.

Digging Deeper
Users can ask sharper AI queries that touch upon important modeling questions, such as, “What is the necessary credit mark for a bank merger?” But again, when I tried this, the results were problematic.

For this question, both AI engines gave me a bogus answer of 4% to 5% credit marks on M&A deals and linked back to a 2018 article from none other than Bank Director magazine. As every reader knows, the banking world has changed significantly since 2018, and the correct answer is 1.5% to 2.5% for 2025 transactions. Perhaps these AI engines will pull from this piece when it is published, and I will get credit.

Inaccurate Information
Further, I asked ChatGPT to write a research report on Truist Financial. The results came back stating Truist would see 3% to 3.5% revenue growth in 2025. I’m sorry to report that the company had lowered its guidance to 1.5% revenue growth two weeks before I asked ChatGPT this question, meaning that ChatGPT was pulling from old information. A human analyst gathering the data would not make the mistake of using quarter-old figures.

I found plenty of other instances where ChatGPT and Perplexity pulled answers from articles and social media posts, at times recycling the same incorrect stories. An example of this was how the platforms described the industry’s liquidity problems as widespread in spring 2023 when three regional banks failed. As a grassroots analyst, I observed firsthand that the only banks struggling with liquidity were the unique institutions that failed. Other banks never had a liquidity issue. This demonstrates the unreliability of this technology and underscores the importance of accurate data and information being fed into these platforms.

It is my opinion and direct experience that AI applications have a long way to go before anyone in the banking sector should rely upon this technology to find information and generate research reports. The lack of detail and analysis, let alone the inability to distinguish facts from narratives, leaves the AI answers behind the curve.

That serves as a warning to banks and board members interested in implementing this technology within their organizations. It proves that often a human touch is still required. Anyone should be wary of relying on a quick answer about banking generated by one of these AI platforms and instead should dig deeper for the analysis they need.

WRITTEN BY

Chris Marinac

Director of Research

As Director of Research at Janney Montgomery Scott, Chris Marinac oversees the firm’s Equity Research team, which covers more than 225 companies within the Financials, Healthcare, Infrastructure, and Real Estate sectors. The team aims to provide first-class research on companies and the industry at large—which means staying ahead of the curve, understanding investors, and considering how events today will affect the future. Chris has more than 27 years of financial services and research analysis experience. Prior to joining Janney in 2019, he was Co-Founder and Director of Research at FIG Partners LLC, a premier investment banking and research firm specializing in community banks. At FIG, he established and managed an award-winning Equity Research team that covered more than 150 banks, thrifts, and REITs. Earlier in his career, he spent six years as Managing Director at SunTrust Robinson Humphrey and five years as a Research Analyst at Wachovia Corporation (formerly Interstate/Johnson Lane Inc.).