Technology
10/09/2023

AI-Driven Fraud Is Banking’s Next Great Risk

When it comes to AI fraud, banks are fighting fire with fire.

ChatGPT’s tsunami-like landfall in November 2022 made clear that generative artificial intelligence (AI) marked a profound advancement with broad applicability. Yet just as AI’s newest incarnation represents a major opportunity for businesses of all stripes, it’s also a potentially significant boost for bad actors — and banks, in particular, need to be on guard.  

Corporate consultancy McKinsey & Co. expects generative AI to add some $4 trillion to the global economy, including up to $340 billion annually in banking, mainly through increased productivity. With interest rates possibly remaining high for some time, bank profits and growth could be under pressure, which is sure to make AI-driven efficiencies all the more desirable. 

A few major players have made investments, but the vast majority of executives at U.S. banks remain cautious when it comes to adding advanced or generative AI, such as OpenAI’s ChatGPT, capabilities to their institutions. Generative AI involves training a large language model (LLM) on massive datasets to create new content. In contrast, machine learning focuses on learning from data to make predictions. Both use algorithms to perform complex tasks and have many potential uses in banking. 

Most of the use cases for generative AI are still years away for most banks. But directors and senior leaders still need to be concerned about how generative AI escalates the risk that hackers, scammers and fraudsters pose.

“Generative AI is turbocharging zero-day threats,” says Lee Wetherington, senior director of corporate strategy at technology and core bank provider Jack Henry. “There will be more of them, and they will be more novel in their form and impact. It’s a serious, serious problem.”

One worrying zero-day threat is jailbreaks, or prompt injections. These specialized prompts for platforms like ChatGPT are designed to manipulate the interface into making errors, disclosing sensitive information or executing harmful code. Normally, internal rules would prevent an LLM from responding to a prompt such as “tell me how to defraud a bank.” 

But hackers can overcome these guardrails by appending a long suffix of characters to a normally unacceptable prompt, confusing the model into divulging the information. Banks, with their limited coding capabilities, are unlikely to find and address all possible jailbreak prompts before scammers do. 

Fraudsters almost always get to new tech first, so they’ve been experimenting and developing new scams,” says Alex Johnson, author of the Fintech Takes newsletter. Generative AI, he adds, “is potentially a very powerful tool for fraudsters.”

In the past, fraudsters would trick people into giving up a password or account number and use that to steal funds or buy as much as they could — tactics that bankers have taken steps to combat. With generative AI, scammers can write an email that sounds just like the boss, a government official or a potential client. 

Today’s fraudsters play a longer, more complex game. They can use AI to create undetectable deepfakes, such as doctored videos, and combine it with a false name and legitimate personal information, such as a real, stolen Social Security number, to build convincing synthetic identities. 

A few years ago, creating a top-notch deepfake cost thousands of dollars. But in early 2023, a professor at the University of Pennsylvania’s Wharton School, Ethan Mollick, said he spent just $10 to produce a deepfake video of himself giving a lecture. Hackers may opt to shift away from ransomware attacks and toward synthetic fraud that is becoming increasingly affordable and more difficult to detect. 

Scammer groups released two new open-source AI tools in July alone, WormGPT and FraudGPT, both able to craft authentic-looking scam emails, according to multiple news reports. Ryan Schmiedl oversees fraud detection at JPMorgan Chase & Co. as the bank’s managing director and global head of payments trust and safety and told an industry publication that the company’s most troubling recent attacks have been via email. 

Jeffrey Brown, deputy assistant inspector general for the Social Security Administration, testified to Congress in May that “synthetic identity theft is one of the most difficult forms of fraud to catch because fraudsters build good credit over a period of time using a fake profile before making fraudulent charges and abandoning the identity.” 

Brown detailed how a hacker group schemed to defraud a San Antonio bank, creating some 700 synthetic identities to open new accounts and launch shell companies. These companies and identities applied for Covid-19 assistance, ultimately receiving as much as $25 million in relief funds. And this was before the release of GPT-level tools. 

“We believe that generative AI will prove to be an accelerator for risk,” says Larry Lerner, a partner at McKinsey’s Washington office, citing security, data privacy, and intellectual property protection as key concerns. 

In response, McKinsey and others recommend a combination of defenses for banks. Compiling databases of false and manipulated identities can help, as can a more rigorous screening process for new clients. But first and foremost is identity verification. 

The most advanced verification tool is the liveness test, which assesses video and images to identify the source of a biometric sample. Several firms are vying for market share, including Amsterdam-based VisionLabs and Intel Corp.’s FakeCatcher, which measures light absorbed by blood vessels to assess legitimacy. 

Many of these tools can detect deepfakes; the problem is that scammers are always tweaking and improving their attacks. That’s why experts urge banks to implement liveness checks in conjunction with voice cloning detection and document verification tools, paired with increased transparency and regular monitoring and testing. 

“Banks will need to take a very proactive stance on this, focusing on what it takes to earn ‘digital trust’ — not just meet minimum regulatory standards,” says McKinsey’s Lerner.

The irony is that generative AI may be the best detector of AI-driven fraud. The optimal solution could end up being an in-house LLM trained on financial data, rather than on massive public data, such as with ChatGPT. Banks already have reams of data that might be enough to train a potent LLM. 

In March, Bloomberg L.P. released the first LLM tailored to the finance industry, BloombergGPT. Its creators combined a massive proprietary financial dataset with an equally large set of general-purpose text to train a model that is significantly more accurate than ChatGPT on financial queries, according to Johns Hopkins University, where one of the creators, Mark Dredze, works as a professor. 

JPMorgan Chase & Co. filed a patent in May for what many expect to be another ChatGPT-like platform for finance, while global payments platform Swift is working with Alphabet’s Google and Microsoft Corp. to build its own LLM, according to an industry publication. Trained on Swift’s data store of 10 billion transactions, the new tool should sharply increase its ability to identify anomalies and fraud patterns. JPMorgan has already started using an LLM to detect fraudulent emails by looking for patterns.

The efficacy of your fraud fighting is a reflection of your data,” says Wetherington. But building and training an in-house LLM can cost $4 million or more, according to CNBC, which puts them beyond the reach of most banks. Some might buy an LLM off the shelf, which is likely to lack financial customization and thus provide limited protection. Some will seek alternative solutions, such as joining forces to pool data and build a shared LLM.  

Broadly speaking, the industry is just at the starting gate with generative AI. Coders still need to refine the next generation of AI tools, and regulators will need to address privacy and security issues. Yet banks need to be on high alert for AI-boosted fraud, constantly testing and improving their defenses. 

“Like any new product that comes into the market,” says Laura Spiekerman, president of identity risk firm Alloy, “it turns into a game of cat and mouse, where financial institutions are constantly having to respond to new tactics from fraudsters.”

WRITTEN BY

David Lepeska

David Lepeska is a freelance writer and foreign correspondent.