Last month, the Department of the Treasury issued a troubling report warning that banks are at risk from emerging AI fraud threats. The culprit is a failure to collaborate: the report warns that lenders are not sharing “fraud data with each other to the extent that would be needed to train anti-fraud AI models.”
This report should be a wake-up call. As any fraud-fighting veteran knows, combating fraud is a perpetual arms race, and when new technologies like generative AI emerge, the status quo is disrupted. Right now, the fraudsters are gaining the upper hand. According to a recent survey by the technology firm Sift, two-thirds of consumers have noticed an increase in scams since November 2022, when generative AI tools hit the market.
How is AI changing the fraud landscape? According to the Treasury report, new AI technologies are “lowering the barrier to entry for attackers, increasing the sophistication and automation of attacks, and decreasing time-to-exploit.” These technologies “can help existing threat actors develop and pilot more sophisticated malware, giving them complex attack capabilities previously available only to the most well-resourced actors. It can also help less-skilled threat actors to develop simple but effective attacks.”
The same generative AI technology that helps people create songs, draw pictures, and write code is now being used by fraudsters. For example, they can purchase access to FraudGPT, an AI chatbot sold on the dark web, to create phishing emails and phony landing pages. AI can also produce human-sounding text and images to support impersonation, and generate realistic bank statements populated with plausible transactions.
Today’s banks and auto lenders need the help of fintechs to spot these schemes. Our fraud consortium, for example, has identified fraudsters fabricating bank statements that duplicate legitimate statements with only minor variations in names and charges.
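To make that pattern concrete, here is a minimal sketch of how near-duplicate detection might flag a recycled statement, using only Python's standard library. The sample statements and the 0.9 similarity threshold are hypothetical, purely for illustration; production systems compare parsed fields and far richer signals.

```python
# Minimal sketch of near-duplicate statement detection, assuming the
# statements have already been parsed into normalized text. The sample
# data and the 0.9 threshold below are hypothetical.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Ratio in [0, 1] measuring how much of the two texts match."""
    return SequenceMatcher(None, a, b).ratio()

# A statement already seen by another consortium member...
known = "ACME BANK  JOHN DOE  01/03 COFFEE SHOP 4.50  01/05 PAYROLL 2,150.00"
# ...and a new submission with only the name and one amount changed.
incoming = "ACME BANK  JON DOW  01/03 COFFEE SHOP 4.50  01/05 PAYROLL 2,450.00"

score = similarity(known, incoming)
if score > 0.90:  # high overlap with small edits suggests template reuse
    print(f"possible recycled statement (similarity={score:.2f})")
```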
The financial services industry should heed the Treasury’s call for more data sharing. AI models, including defensive anti-fraud tools, run on data. The report notes that “the quality and quantity of data used for training, testing, and refining an AI model, including those used for cybersecurity and fraud detection, directly impact its eventual precision and efficiency.”
Data sharing is critical because it lets lenders, from prime to subprime, see emerging patterns and fraud threats beyond their own portfolios. With data sharing, once a fraudster commits fraud anywhere in the network, the whole network can defend against it – like a collective immune system. Consortia benefit large and small organizations alike, but they are particularly important for smaller players like credit unions, which lack visibility into the overall threat environment.
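A toy sketch of that immune-system idea, under loose assumptions: members report fraud signals to a shared registry as hashed fingerprints, so raw customer data never leaves the institution, and any member can screen incoming applications against signals reported anywhere in the network. The class, the member scenario, and the data below are all hypothetical.

```python
# Hypothetical sketch of a shared fraud-signal registry. Identifiers
# are hashed so members share signals, not raw PII.
import hashlib

class FraudSignalRegistry:
    def __init__(self):
        self._flagged: set[str] = set()

    @staticmethod
    def _fingerprint(value: str) -> str:
        # Normalize, then hash, so equivalent identifiers match.
        return hashlib.sha256(value.strip().lower().encode()).hexdigest()

    def report(self, identifier: str) -> None:
        """A member reports an identifier (phone, email, VIN) tied to fraud."""
        self._flagged.add(self._fingerprint(identifier))

    def is_flagged(self, identifier: str) -> bool:
        """Any member can check an application against the whole network."""
        return self._fingerprint(identifier) in self._flagged

registry = FraudSignalRegistry()
registry.report("fraudster@example.com")             # seen at Lender A
print(registry.is_flagged("Fraudster@Example.com"))  # Lender B gets a hit: True
```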
We applaud industry efforts like the American Bankers Association’s information exchange, which reportedly enables banks to share the names and account information of suspected scammers. But trade association-led initiatives like these are insufficient. Fraudsters operate at the speed of innovation, with massive data at their disposal and constantly changing methods of attack. They are dangerous precisely because they operate without regulatory, privacy, or any other conventional business constraints; they focus solely on what works. It is not in the DNA of any trade association to lead with the most innovative data sets or the most sophisticated technology needed to stay one step ahead of these fraudsters and their networks.
In contrast, this is exactly where fintechs can help. It took companies like FICO, with its Falcon software, to corral transaction fraud. I worked at ID Analytics in the early 2000s when we created a consortium of identity information with major lenders to turn the tide on identity fraud. Just as with those earlier fraud-fighting innovations, the solutions for AI-generated fraud will most likely come from industry leaders and fintechs equipped with the most advanced machine-learning technology and the most sophisticated data sets.
Today, fintechs are building cutting-edge fraud consortia. Millions of loan applications from the nation’s largest lenders, auto finance companies, credit unions, and fintechs pass through these systems. They collect and analyze data in real time, at the speed of the transaction, to help lenders stop a fraudulent loan before it is funded. And these providers defend lenders with the same classes of tools – anomaly detection, knowledge graphs, models trained on historical fraud – that AI-equipped fraudsters use to commit the fraud.
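As a hedged illustration of one of those defensive techniques, anomaly detection, the sketch below scores an incoming loan application against historical ones using scikit-learn’s IsolationForest. The features and values are hypothetical; real consortium models draw on far richer shared data.

```python
# Illustrative anomaly detection on loan applications. The features,
# values, and contamination rate are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [stated_income, loan_amount, applications_last_30_days]
historical = np.array([
    [65_000, 22_000, 1],
    [48_000, 18_500, 1],
    [82_000, 30_000, 2],
    [55_000, 21_000, 1],
    [71_000, 26_000, 1],
])

model = IsolationForest(contamination=0.1, random_state=42).fit(historical)

# An incoming application with inflated income and a burst of recent apps.
incoming = np.array([[250_000, 60_000, 9]])
if model.predict(incoming)[0] == -1:  # -1 means the model isolated it as an outlier
    print("flag for manual fraud review before funding")
```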
Yes, auto lenders should be worried about the new AI fraud threats. The problems are real and cannot be solved by any one company working in a silo. However, we can all take some heart in knowing that the next generation of tech innovators is working right now to thwart the latest fraud threats.