Artificial intelligence is transforming financial fraud at an alarming pace, making scams more sophisticated and harder to detect. While fraud attempts have surged by 80% in the past three years, just 22% of firms have AI-powered defences in place. Stuart Wilkie, Head of Commercial Finance at Anglo Scottish Finance, explores the evolving threat landscape and how institutions — and individuals — can fight back.
Tackling financial fraud has become more difficult than ever in recent years, thanks to the growing use of artificial intelligence (AI). A recent report from Signicat highlights AI's role in the murky world of financial fraud, suggesting that AI now accounts for 42% of all financial fraud attempts – while just 22% of firms have AI defences in place. This disconnect is worrying, but sadly, it's nothing new.
The use of AI in financial fraud was rising even before the launch of ChatGPT, the world's most popular AI chatbot, in late 2022 – and it has only accelerated since. A 2022 report from Cifas found an 84% increase in the number of cases where AI was used to attack banks' security systems.
AI has made it easier for fraudsters to carry out their schemes, which has in turn driven up the overall incidence of fraud. Signicat's report also found that the volume of fraud attempts is rising rapidly, with total attempts up by 80% over the last three years. That growth is partly down to AI lowering the barrier to executing financial fraud schemes, though external factors also play a part.
So what are the most common forms of AI-fuelled financial fraud, and how can it be combatted at both an individual and an institutional level?
The majority of AI-aided financial fraud can be categorised as synthetic identity fraud. In this scam, fraudsters use AI to create fake identities composed of a mix of real and fabricated information, then use them to apply for loans, lines of credit or even benefits.
AI's ability to quickly identify patterns within large datasets allows fraudsters to create realistic profiles that align with demographic trends. Generative AI is also used in the identity-creation process, simulating a plausible credit history. The resulting profiles are near-impossible to distinguish from real people under standard verification checks.
A report from the U.S. Government Accountability Office (GAO) estimates that more than 80% of new account fraud can be attributed to synthetic identity fraud – indicating the vital importance of improving security measures.
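On the defensive side, one common countermeasure is anomaly detection: a model trained on legitimate applications learns which combinations of attributes plausibly co-occur, and flags records that don't fit. The sketch below is purely illustrative – the features, numbers and model choice are assumptions, not any bank's actual checks – but it shows the idea of catching a synthetic profile whose attributes are individually plausible yet mutually inconsistent.

```python
# Illustrative only: a toy anomaly-detection pass over identity records.
# Feature names, distributions and thresholds are invented, not real bank checks.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated legitimate applicants: age, years of credit history,
# number of credit accounts, and address tenure tend to move together.
ages = rng.normal(45, 12, 500).clip(18, 90)
legit = np.column_stack([
    ages,
    (ages - 18) * rng.uniform(0.3, 0.8, 500),  # credit history grows with age
    rng.poisson(4, 500),                       # number of credit accounts
    rng.uniform(1, 20, 500),                   # years at current address
])

# Synthetic identities often combine inconsistent attributes,
# e.g. a young "applicant" with a long, dense credit history.
synthetic = np.column_stack([
    rng.uniform(19, 25, 5),    # young
    rng.uniform(15, 25, 5),    # implausibly long credit history
    rng.poisson(12, 5),        # unusually many accounts
    rng.uniform(0.1, 0.5, 5),  # very short address tenure
])

model = IsolationForest(contamination=0.01, random_state=0).fit(legit)
print(model.predict(synthetic))  # -1 marks a record as anomalous
```

Real verification systems combine many more signals – document checks, device fingerprints, bureau data – but the principle of scoring a profile's internal consistency is the same.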
The growing adoption of biometrics as a security measure has reduced our reliance on passwords. For many people, it’s made life easier – there’s less pressure to remember umpteen different passwords, knowing that your face or your fingerprint is enough to sign into your mobile banking or social media.
However, generative AI has made it easier for fraudsters to bypass these mechanisms through deepfaking (images, audio or video that are edited or generated with AI, depicting real or non-existent people).
When combined with other identifying factors – such as an individual's national insurance number or the first line of their address – deepfakes are increasingly finding gaps in financial institutions' security measures, giving fraudsters access to bank accounts and much more.
As well as helping scammers impersonate banking customers to gain access to their accounts, generative AI is helping them target customers by impersonating customer service representatives. In the past, spotting fraudulent text messages or emails was typically easier – they might contain spelling mistakes or grammatical errors, or be written in a tone of voice that didn't match your bank's.
Now that scammers are using generative AI chatbots, however, generating an email that sounds exactly like your bank is far easier – they can match the corporate email tone with ease and will never make a spelling mistake.
This side of financial fraud extends far beyond emails, too – there have been numerous instances of scammers creating entire fake websites, using AI-generated content and designing the pages to mimic those of a trustworthy bank.
Thankfully, just as fraudsters are using AI to commit their crimes, banking and finance institutions are using machine learning to detect fraudulent activity – and getting progressively better at it. HSBC, for example, partnered with Google in 2021 to develop an AI system for detecting financial crime.
The resulting Dynamic Risk Assessment system is becoming increasingly accurate: false positives were initially common, but fell by 60% between 2021 and 2024. The more accurate these systems become, the better chance we have of eliminating financial fraud altogether.
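To make the trade-off such systems manage concrete, here is a generic, self-contained sketch – not HSBC's Dynamic Risk Assessment, whose internals are not public – of how a fraud classifier's alert threshold is tuned. Raising the threshold cuts false positives, at the cost of missing some genuine fraud; all data and parameters below are invented for illustration.

```python
# Generic sketch of threshold tuning on a fraud classifier.
# Features, labels and thresholds are synthetic, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy transaction features: log amount, hour of day, merchant risk score.
n = 10_000
X = np.column_stack([
    rng.normal(3, 1, n),
    rng.integers(0, 24, n),
    rng.uniform(0, 1, n),
])
# Fraud is rare (~1%) and skews toward high amounts and risky merchants.
fraud_score = 0.8 * X[:, 0] + 2.5 * X[:, 2] + rng.normal(0, 1, n)
y = (fraud_score > np.quantile(fraud_score, 0.99)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0
)
clf = LogisticRegression(class_weight="balanced").fit(X_train, y_train)
probs = clf.predict_proba(X_test)[:, 1]

# Raising the alert threshold trades fewer false positives for missed fraud.
for threshold in (0.5, 0.8, 0.95):
    alerts = probs >= threshold
    caught = int(np.sum(alerts & (y_test == 1)))
    false_positives = int(np.sum(alerts & (y_test == 0)))
    print(f"threshold={threshold}: {caught} frauds flagged, "
          f"{false_positives} false positives")
```

Cutting false positives matters commercially as well as technically: every false alarm is a blocked legitimate transaction and a frustrated customer.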
Generally, banks are doing a good job of shoring up their biometric systems against deepfaking – and the more deepfake attempts their machine-learning algorithms detect, the faster those algorithms become at identifying the next ones.
It's not just about combatting fraud at an institutional level, however. Part of preventing fraud from taking place at all is education – teaching banks' customers to spot new and developing scams so they aren't caught out.
With AI and other technological advances changing the fraud landscape on an almost daily basis, however, this can be challenging. If individuals receive communications from their bank – via email, phone call or any other method – they need to interrogate what they're actually being asked to do. Most banks will never ask for sensitive credentials such as a full password or PIN, so people need to stay clued up at all times.
Stuart Wilkie, Head of Commercial Finance at Anglo Scottish
“AI versus finance: the battle against fraud escalates” was originally created and published by Leasing Life, a GlobalData-owned brand.