InsureThink

How AI deepfakes are rewriting the rules of insurance cybersecurity

Image: The definition of deepfakes, with the word "deepfakes" highlighted in pink. Adobe Stock.

Imagine this scenario: an agent representing your insurance carrier receives a video from a long-time policyholder showing water damage to his home. The team recognizes his name, face and mannerisms. Everything seems legitimate, and because it's a relatively small claim from a trusted policyholder, the carrier approves a $35,000 payment.

The only problem is that the video wasn't real. It was a deepfake generated by a malicious actor using generative AI. The claim was fraudulent, and it flew completely under the radar.

This hypothetical situation is becoming all too real. Deepfakes already account for $12 billion in fraud losses globally and could reach $40 billion within two years, according to Deloitte. Even worse, these new attack vectors blow past traditional approaches to fraud detection, claims verification and cyber risk assessment. To combat the growing threat, carriers must act swiftly.

Why AI deepfakes work on us

Human psychology is the vulnerability AI deepfakes actively exploit. People naturally trust familiar faces, including their business colleagues and associates. Cybercriminals use this to their advantage, and they've become so good at mimicking the look and sound of real-world people with AI that they're even fooling world leaders.

Right before the July 4 holiday, a bad actor used AI to impersonate U.S. Secretary of State Marco Rubio, sending spoofed messages by text, Signal and voicemail to U.S. and foreign officials. Two months earlier, a similar AI deepfake scam involved Susie Wiles, President Donald Trump's chief of staff.

AI deepfakes don't only target well-known people. They impact the private sector, too, and insurance isn't immune. Gartner estimates that 28% of organizations have already received a deepfake audio attack, while another 21% have experienced a deepfake video attack. Only 5% of those deepfakes have resulted in the theft of money or intellectual property, but just one incident can be costly. Just ask the UK engineering firm that lost $25 million to a deepfake scam in May.

The real risks for carriers

AI-powered deepfakes open new doors for claims fraud, identity theft and impersonation of policyholders, agents or executives. This harsh reality touches multiple parts of the insurance value chain.

Start with the claims process, where deception already runs rampant. Deloitte estimates that nearly 10% of all P&C claims are fraudulent, costing $122 billion annually. Carriers rely on photo and video evidence to process property, auto and health claims, and faked photos of accident damage are already common. If AI deepfakes can convincingly mimic damage, injuries or even voice reports, carriers may end up losing even more to fraud than they already do.

Underwriting is affected as well. Carriers that underwrite cyber risk will need to reassess their insureds' risk profiles to determine how much of a financial threat deepfakes pose to those businesses.

Perhaps most importantly, customer trust and brand reputation are at stake, too. Consider a situation where a policyholder is duped into providing financial information to someone who looks and sounds like they represent your company. Odds are, you will have lost that customer for life, even if you can prove your company wasn't at fault.

Meeting the threat head-on

Because deepfakes can replicate a person's tone, facial expressions and speech patterns with alarming accuracy, traditional voice and facial recognition tools are quickly becoming obsolete. Carriers and the businesses they represent need updated verification protocols and monitoring solutions. A few must-haves:

Uniquely human authentication programs. Old-school challenge-response authentication, such as asking knowledge-based questions ("What was the name of your first pet?"), can be a deceptively simple second form of identity verification. It only works if the answer isn't already posted on Facebook, LinkedIn or other public places. These questions and codewords act as a human firewall, separating an impersonator from the real-life person they're trying to mimic.
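To make the idea concrete, here is a minimal sketch of how a carrier's system might store and check an agreed codeword. The workflow, function names and storage choices are illustrative assumptions, not a description of any specific product.

```python
# Minimal sketch, assuming a hypothetical carrier workflow in which each
# policyholder registers a private codeword out of band (by mail or in person).
# Names, parameters and storage are illustrative only.
import hashlib
import hmac
import secrets


def hash_codeword(codeword: str, salt: bytes) -> bytes:
    """Store only a salted hash of the agreed codeword, never the plaintext."""
    return hashlib.pbkdf2_hmac("sha256", codeword.encode(), salt, 100_000)


def register_codeword(codeword: str) -> tuple[bytes, bytes]:
    """Create the salt/hash pair once, when the policyholder enrolls."""
    salt = secrets.token_bytes(16)
    return salt, hash_codeword(codeword, salt)


def verify_caller(response: str, salt: bytes, stored_hash: bytes) -> bool:
    """Constant-time comparison so the check itself leaks nothing."""
    return hmac.compare_digest(hash_codeword(response, salt), stored_hash)


# Enrollment happens once; every later voice or video contact gets challenged.
salt, stored = register_codeword("blue-heron-1978")
print(verify_caller("blue-heron-1978", salt, stored))   # True: codeword matches
print(verify_caller("rex", salt, stored))               # False: challenge fails
```

The key design choice is that only a salted hash of the codeword is stored and the comparison is constant-time, so the extra verification step doesn't create a new secret for attackers to steal.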

AI-powered fraud detection tools. Carriers need to strengthen their fraud detection capabilities by implementing AI-enabled tools that can spot manipulated audio, video and images. Three top choices (a rough sketch of the first follows the list):

· Behavioral biometric tools can analyze user habits, such as typing speed or mouse usage, which are difficult for deepfakes to mimic.
· Liveness detection tools can check for subtle cues, such as blinking patterns or responses to specific prompts, that AI can't yet replicate.
· Social media monitoring tools can detect fraudulent profiles (such as a phony LinkedIn bio). They can also identify when a company's or person's name or likeness is being misused.
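As a rough illustration of the first item, the sketch below scores a session by comparing its keystroke cadence with an enrolled baseline. The features, threshold and baseline values are assumptions for demonstration; a production behavioral-biometric system would use far richer signals.

```python
# Illustrative sketch of one behavioral-biometric signal: keystroke timing.
# The baseline profile, threshold and timestamps below are assumed values,
# not a production fraud model.
from statistics import mean


def timing_gaps(key_down_times_ms: list[float]) -> list[float]:
    """Gaps between successive keystrokes, in milliseconds."""
    return [b - a for a, b in zip(key_down_times_ms, key_down_times_ms[1:])]


def anomaly_score(session_times_ms: list[float],
                  baseline_mean: float, baseline_std: float) -> float:
    """How far this session's typing rhythm sits from the enrolled baseline."""
    session_mean = mean(timing_gaps(session_times_ms))
    return abs(session_mean - baseline_mean) / max(baseline_std, 1e-6)


# Enrolled profile (assumed): average 180 ms between keystrokes, spread 25 ms.
BASELINE_MEAN, BASELINE_STD = 180.0, 25.0

# Key-down timestamps captured during a claims-portal session (assumed), in ms.
session = [0.0, 150.0, 310.0, 455.0, 620.0, 790.0]

if anomaly_score(session, BASELINE_MEAN, BASELINE_STD) > 3.0:
    print("Flag session for manual identity verification")
else:
    print("Typing rhythm consistent with the enrolled profile")
```

A real deployment would combine signals like this with liveness checks and manual review rather than rely on any single score.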

Education for customers and brokers. Carriers should make their customers aware of the risks AI deepfakes pose and educate them on how to identify legitimate communications. Carriers should also communicate regularly with staff about AI deepfakes. These crucial steps will strengthen your organization's cyber resilience, build trust with your clients, and give your underwriting team the information it needs to write risks accurately.

Time is of the essence

With AI advancing so rapidly, carriers do not have the luxury of time when it comes to stopping deepfakes. Immediate action is needed. Early adopters will protect their employees, customers, and brand, while laggards will quite literally pay the price.
