12/01/2025

The Emerging Threat Posed by AI

Artificial intelligence enables criminals to scale up scams and fraud attacks. To fight back, banks will need a combination of high-tech and low-tech solutions.

Laura Alix
Director of Research

Human manipulation is at the heart of every fraud scheme, but the rapid adoption of artificial intelligence enables bad actors to take it to a scale never before seen. Is the banking industry prepared to deal with this fast-emerging threat? 

AI turns fraud into a numbers game; the perpetrator doesn’t need to succeed in every single attempt, but if they can con a consumer or business out of money in one or two instances out of 100, that can still be a significant payday. 
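
To make that arithmetic concrete, here is a rough sketch with purely hypothetical numbers; the volume, hit rate and average loss below are illustrative assumptions, not figures from this article.

```python
# Hypothetical numbers to illustrate the "numbers game": even a tiny hit rate
# pays off once AI lets a fraudster automate thousands of attempts.
attempts = 10_000   # AI-generated outreach messages sent in one campaign
hit_rate = 0.01     # roughly 1 target in 100 is successfully conned
avg_loss = 2_500    # average dollars lost per successful scam

expected_haul = attempts * hit_rate * avg_loss
print(f"Expected haul from one campaign: ${expected_haul:,.0f}")  # -> $250,000
```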

“Because of AI, [criminals] can replicate fraud schemes much quicker than the old school way of doing it, where a human had to do each one,” says Sarah Beth Felix, founder and president of Palmera Consulting.  

Scams — when a victim is manipulated into giving up their account information or sending money to a bad actor — are a type of fraud. Romance scams, investment scams and business email compromise are not particularly novel forms of fraud on their own. But with AI, fraudsters can amass data on their targets more quickly and reach more potential victims. And with generative AI, they can create realistic deepfake photos, videos or audio to trick people into forking over money. In one example, a finance worker in the Hong Kong office of a multinational firm was conned into sending $25 million to scammers who used a deepfake video to pose as the company’s chief financial officer, Hong Kong authorities said in 2024.

Experts believe it’s only a matter of time before fraud involving AI becomes a major threat to the banking industry, though it currently makes up a relatively small proportion of incidents. Just 15% of bank executives and directors who participated in Bank Director’s 2025 Risk Survey indicated that their bank or its customers had been directly impacted by fraud involving AI or deepfake media over the prior 18 months. The Deloitte Center for Financial Services estimated in 2024 that the use of generative AI could propel U.S. fraud losses to $40 billion as soon as 2027, compared with around $12 billion in 2023.

“AI use is growing rapidly,” observes Steve Sanders, chief risk officer and chief information security officer with CSI. “Right now, the criminals are making better use of it than most security teams.”  

Scams using deepfake media can be especially pernicious because they’re often initiated outside the walls of a bank. They don’t come to a banker’s attention until a customer has either already lost money or is determined to send money to a scammer who has convinced them they’re dealing with a grandchild in need of bail money or a long-distance lover. That’s why customer-facing bank employees need to be empowered to have difficult conversations with potential victims, Felix says. That may be simple and low tech, but it’s far from easy.

“No one wants to have difficult conversations because they’re so worried about that customer being upset, they can’t do what they want so they take all their deposits and leave,” she says. “Taking a hard line and saying, ‘We will not process this for you’ is a tough prospect in a customer-facing role.”   

Plenty of tools on the market can help banks slow the tide of AI-enabled fraud, including account verification software, behavioral biometrics and device fingerprinting. Syed Raza, managing director with FTI Consulting, points to identity verification analysis as one example. That technology can quickly alert a bank when an account is opened with an ID that has been used to open hundreds of other accounts, which could indicate the identity has been stolen or synthesized to create dummy accounts.
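
At its core, Raza’s example comes down to counting how often the same identity document shows up across new-account openings and alerting when that count is abnormal. Here is a minimal sketch of that idea in Python; the record layout, field names and reuse threshold are hypothetical, not drawn from any specific verification product.

```python
from collections import Counter

# Hypothetical account-opening records. Field names, values and the reuse
# threshold are illustrative only, not any specific vendor's product.
openings = [
    {"account_id": "A-1001", "id_document_hash": "d41d8c", "opened": "2025-11-02"},
    {"account_id": "A-1002", "id_document_hash": "9b74c9", "opened": "2025-11-02"},
    {"account_id": "A-1003", "id_document_hash": "d41d8c", "opened": "2025-11-03"},
]

REUSE_THRESHOLD = 2  # in practice this would be far higher, e.g. dozens of openings

def flag_reused_ids(records, threshold=REUSE_THRESHOLD):
    """Return ID-document hashes that appear on an unusual number of new accounts."""
    counts = Counter(rec["id_document_hash"] for rec in records)
    return {doc: n for doc, n in counts.items() if n >= threshold}

for doc, n in flag_reused_ids(openings).items():
    print(f"ALERT: ID document {doc} appears on {n} new accounts; review for stolen or synthetic identity")
```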

But if banks are going to use AI to fight fraud perpetrated by AI, it’s critical that they have clean and consistent data to train those tools. They have to teach the technology what red flags it’s watching out for in the first place, Raza says. 

“You have to have a fraud program, update your fraud program, make sure all fraud risks and possible scenarios are identified, controls well designed and are in place, and procedures are well documented. Then you can automate whatever you want,” Raza says. “But if you fail that first part, then unfortunately no technology will be able to help.” 

Bank leaders, including directors, need to understand these tools before investing in them, says Sean Goodwin, a principal in the DenSecure Group at Wolf & Co. They need to know what security protocols are in place for any AI tools the bank may adopt and whether the bank’s data is ever repurposed for other clients.

Additionally, bank employees may be inadvertently exposing the organization to risk with their own use of generative AI tools. Verizon’s 2025 Data Breach Investigations Report found that 15% of employees at organizations examined in the report were routinely using generative AI tools on corporate devices, which can put company data at risk.

“Banks are trying to balance allowing their employees to use these tools that can bring a lot of efficiency gains without putting their data at risk,” Goodwin says. “My biggest concern around this is data leakage and data loss, people not understanding how to use these systems securely, where the information is going and how it’s protected.”  

Finally, the board needs to make sure the bank’s risk management framework reflects the variety of new threats that AI introduces into the fraud landscape, including customer vulnerabilities and data leaks from casual usage of generative AI tools. 

“The biggest misconception is that the AI-based fraud schemes are so sophisticated that banks cannot stop them, which is not true,” Raza says. “As sophisticated as AI scams are, the tools to stop them are equally — if not more — sophisticated.”

*This story has been updated to correct the name of Sean Goodwin, a principal in the DenSecure Group at Wolf & Co.

WRITTEN BY

Laura Alix

Director of Research

Laura Alix is the Director of Research at Bank Director, where she collaborates on strategic research for bank directors and senior executives, including Bank Director’s annual surveys. She also writes for BankDirector.com and edits online video content. Laura is particularly interested in workforce management and retention strategies, environmental, social and governance issues, and fraud. She has previously covered national and regional banks for American Banker and community banks and credit unions for Banker & Tradesman. Based in Boston, she has a bachelor’s degree from the University of Connecticut and a master’s degree from CUNY Brooklyn College.