BankThink

AI is about to make synthetic fraud a much bigger problem

The expanding availability and capability of artificial intelligence could lead to a surge of synthetic fraud.
Bloomberg News

"Randal Stevens."

"Who?"

"The silent silent partner. He's the guilty one, your honor — the man with the bank accounts."

"Well, who is he?"

"He's a phantom. An apparition. Second cousin to Harvey the Rabbit. I conjured him out of thin air."

Andy Dufresne, in addition to being falsely imprisoned for murder, was apparently quite a whiz at money laundering. Having earned the trust of Shawshank's warden and guards, Dufresne found creative ways to channel the warden's stream of ill-gotten gains into the marketplace and back out again as legitimate investments, all the while having the paper trail lead to an imaginary person — a person whose identity he then assumed upon his escape from prison.

But synthetic identity fraud isn't just a neat trick Stephen King invented for double-crossing your prison warden — it's a real problem in the world of cybersecurity and identity theft, and one that is only getting bigger now that deepfakes and other artificial intelligence-driven technologies have become better and more widely available over the last six months or so.

Whereas traditional identity fraud is kind of a smash-and-grab operation — fraudsters take stolen information and try to use it to buy as much as they can before the identity's owner notices — synthetic fraud is a longer game. Using a combination of legitimate and fabricated personal information, fraudsters create a new persona — one who can't alert authorities that their identity has been stolen, because that person doesn't exist.

The cornerstone of a synthetic identity is a Social Security number, which is easy enough to steal. But the SSNs best suited to synthetic identity fraud belong to those least likely to notice some funny business on their credit report — most often children, the elderly or those serving prison time. 

As I said before, this isn't a new problem, though it has, to date, been less widespread than traditional identity theft. But the things that make synthetic fraud easier to detect — pictures or videos, for example — are becoming easier to bypass thanks to AI. Indeed, the Russian hacking group Cl0p (or "clop") has been shifting its tactics away from ransomware and toward synthetic fraud precisely because the scam is so hard to detect. That has serious implications for banks, but also for government agencies that disburse public benefits. One estimate suggests that a single synthetic identity can yield a hacker around $2 million worth of government benefits alone.

There are ways to fight synthetic identity fraud — some private companies compile their own databases of false or manipulated identities that banks can compare applicants against — and rigorous screening can often shake the false identities out from the real ones. But there's a tradeoff in making applications too rigorous: If a legitimate applicant finds the process too onerous, they might just become some other bank's customer.
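For the technically inclined, here's a rough sketch in Python of what that kind of screening might look like. Everything in it is hypothetical: the database, the fields and the thresholds are invented for illustration, and real vendors keep theirs proprietary.

```python
# A minimal sketch of screening an applicant against a shared fraud
# database. All names, fields and thresholds here are hypothetical;
# real vendor databases and their APIs are proprietary.

from dataclasses import dataclass


@dataclass
class Applicant:
    ssn: str
    name: str
    date_of_birth: str  # ISO 8601, e.g. "1990-04-12"


# Stand-in for a vendor-maintained list of identities already flagged
# as false or manipulated.
KNOWN_SYNTHETIC_SSNS = {"123-45-6789", "987-65-4321"}


def screen(applicant: Applicant, credit_file_age_months: int) -> list[str]:
    """Return a list of red flags; an empty list means none were found."""
    flags = []
    if applicant.ssn in KNOWN_SYNTHETIC_SSNS:
        flags.append("SSN matches a known synthetic identity")
    # Synthetic identities tend to sit on thin, recently created credit
    # files; the 12-month cutoff is an arbitrary illustrative threshold.
    if credit_file_age_months < 12:
        flags.append("credit file is unusually new")
    return flags


if __name__ == "__main__":
    andy = Applicant("123-45-6789", "Randal Stevens", "1947-09-20")
    print(screen(andy, credit_file_age_months=6))
```

The tradeoff described above lives in that list of flags: every additional check catches more phantoms, but it also sends more legitimate applicants to a competitor.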

There are technical solutions to these problems — for example, creating a unified process for checking SSNs against a master database to ensure that an applicant is who they say they are, thus stopping synthetic fraud before it starts. Turns out Congress thought of that back in 2018 when it passed S. 2155 — better known in these pages as the Crapo bill. That law directed the Social Security Administration to develop the electronic Consent Based Social Security Number Verification service, known as eCBSV.  
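To see how consent-based verification closes the loop, here's a schematic sketch, again in Python and again hypothetical: the records table and function names are stand-ins, since the real eCBSV service has its own enrollment, authentication and request formats. What the sketch does get right is the shape of the exchange: a bank submits an SSN, name and date of birth with the applicant's consent, and gets back only a match or a no-match.

```python
# A schematic of the consent-based verification flow eCBSV enables.
# SSA_RECORDS and verify_with_ssa() are hypothetical stand-ins for the
# real service, which has its own enrollment and request formats.

# Mock master records: SSN -> (name, date of birth).
SSA_RECORDS = {
    "123-45-6789": ("Jane Doe", "1980-01-01"),
}


def verify_with_ssa(ssn: str, name: str, dob: str) -> bool:
    """Hypothetical eCBSV-style check: True only if all three fields
    match the records, with no detail about the record disclosed."""
    return SSA_RECORDS.get(ssn) == (name, dob)


def open_account(ssn: str, name: str, dob: str, has_consent: bool) -> bool:
    """Gate account opening on consent-based SSN verification."""
    # The service is consent-based: a bank may query only with the
    # applicant's signed (electronic) permission.
    if not has_consent:
        raise ValueError("cannot verify without the applicant's consent")
    # A no-match means the identity, as presented, isn't in SSA records,
    # which is exactly what a synthetic identity looks like.
    return verify_with_ssa(ssn, name, dob)


# "Randal Stevens" never existed, so the check comes back no-match.
assert not open_account("987-65-4321", "Randal Stevens", "1947-09-20", True)
```

That one-bit answer is the point: the bank learns whether the combination checks out, not anything else in the government's file.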

But fraud persists nonetheless. Jeffrey Brown, a deputy assistant inspector general at the Social Security Administration, testified in May that the agency had uncovered highly sophisticated fraud schemes involving not only bank accounts and loans but also elaborate networks of shell corporations, which ripped off the Paycheck Protection Program to the tune of $20 million to $25 million — and that was just one group of perpetrators.

Banks have been enthusiastic about finding ways that AI can improve their business, from implementing chatbots to streamlining the customer onboarding experience to speeding up underwriting decisions. But that technology cuts both ways, and banks need to be extra careful about knowing their customers. 
