Detecting and combating insurance deepfake fraud

Attendees use a screen installation demonstrating deepfake technology inside the Congress Center ahead of the World Economic Forum in Davos, Switzerland, on Jan. 20, 2020. Photographer: Jason Alden/Bloomberg

The rise of artificial intelligence (AI) has transformed many aspects of daily life, from providing weather updates to curating custom music playlists. Alongside these benefits, however, comes concern about potential misuse, particularly in areas such as insurance fraud. One alarming development is the spread of "deepfakes."

Deepfakes are AI-generated phony images, audio and video that appear strikingly realistic.

Advanced and unmatched realism

Edited and altered images and videos are nothing new. Deepfakes, however, distinguish themselves from conventional photo editing tools through their unmatched realism. While photo editing software is primarily used to enhance or retouch static images, a kind of digital "cosmetic surgery" for photos, deepfakes operate at a different level. They stem from advanced AI: deep learning algorithms analyze vast datasets to learn and replicate human-like behavior. This allows them to create dynamic, convincing audio, video and images that portray individuals performing actions or speaking words they never actually did, all with astonishing authenticity.
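To make the underlying mechanism concrete, the sketch below reduces the adversarial training loop behind many deepfake generators to a toy one-dimensional example: a generator learns to imitate a "real" data distribution while a discriminator learns to tell real samples from generated ones. It assumes PyTorch is installed; the network sizes, learning rates and toy data are illustrative and not drawn from any actual deepfake system.

```python
# Toy sketch of the adversarial training idea behind deepfakes (assumes PyTorch).
# Real deepfake systems apply the same principle to faces and voices at vastly larger scale.
import torch
import torch.nn as nn

torch.manual_seed(0)

# "Real" data the generator must imitate: samples from a shifted normal distribution.
def real_batch(n=64):
    return torch.randn(n, 1) * 0.5 + 2.0

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    # 1) Train the discriminator to label real samples 1 and generated samples 0.
    real = real_batch()
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator to fool the discriminator into outputting 1.
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# After training, generated samples cluster near the "real" mean of 2.0.
print("mean of generated samples:", generator(torch.randn(1000, 8)).mean().item())
```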

Deepfakes and insurance fraud

Deepfakes have emerged as a powerful tool for fraudsters seeking to exploit insurance carriers for financial gain. Using AI tools, fraudsters can fabricate accidents and injuries, generating convincing visual and audio simulations of events that never occurred. For instance, they might create deepfake videos of car collisions or workplace accidents, complete with fabricated injuries and fake eyewitness accounts. Similarly, AI-powered software can seamlessly alter digital evidence, such as photographs accompanied by fabricated metadata, to support fraudulent claims for alleged damage to vehicles, homes or other property.

Moreover, fraudsters can forge medical records with the aid of deepfakes, producing detailed reports of fictitious treatments and procedures. These fabricated documents may include AI-generated medical bills, test results and physician notes, all aimed at extracting financial compensation from insurers. Deepfakes can also facilitate impersonation of policyholders, with fraudsters creating convincing audio recordings or video calls to submit false claims.

Accident details that are crucial to insurance claims are thus vulnerable to manipulation through AI-powered deepfake techniques. By staying aware of these deceitful tactics, insurers can fortify their defenses against fraudulent activity and preserve the integrity and fairness of the insurance industry.

Combating AI-powered fraud

One way to stay ahead of deepfakes is thorough investigation: detecting them requires more than a cursory examination of the evidence.

A thorough investigation entails requesting genuine, original, certified documents directly from relevant parties, leaving no room for deception. Gathering additional visual evidence, including photographs and videos from eyewitnesses and nearby surveillance cameras, complements this comprehensive approach. By integrating traditional document review with visual evidence, investigators can effectively maintain the integrity of their investigations.
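As a small illustration of this kind of evidence vetting, an investigator might begin by inspecting a submitted photograph's embedded EXIF metadata for red flags such as missing camera information or traces of editing software. The sketch below assumes the Pillow library is available; the file name, tags and heuristics are illustrative starting points, not a definitive authenticity test.

```python
# Minimal sketch: flag suspicious EXIF metadata in a submitted claim photo.
# Assumes Pillow is installed (pip install Pillow). Absent or edited metadata
# is a prompt for follow-up questions, not proof of fraud.
from PIL import Image, ExifTags

def metadata_red_flags(path):
    flags = []
    exif = Image.open(path).getexif()
    # Map numeric EXIF tag ids to readable names, e.g. 306 -> "DateTime".
    named = {ExifTags.TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

    if not named:
        flags.append("No EXIF metadata at all (often stripped or regenerated).")
    if "Make" not in named or "Model" not in named:
        flags.append("Missing camera make/model.")
    software = str(named.get("Software", "")).lower()
    if any(editor in software for editor in ("photoshop", "gimp", "firefly")):
        flags.append(f"Editing software recorded in metadata: {named['Software']}")
    if "DateTime" not in named:
        flags.append("Missing capture timestamp to compare against the claimed loss date.")
    return flags

if __name__ == "__main__":
    for issue in metadata_red_flags("claim_photo.jpg"):  # hypothetical file name
        print("FLAG:", issue)
```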

Another evolving way to combat the growing threat of deepfake-related fraud is with AI-powered algorithms, essentially using AI to detect AI. These algorithms apply advanced machine learning techniques to large volumes of data, detecting patterns and anomalies indicative of deepfake manipulation. To assess authenticity, they can scrutinize various aspects of digital content, including facial expressions, voice inflections and environmental factors. AI can also conduct comprehensive forensic analysis of digital media, identifying subtle discrepancies in pixel intensity, lighting conditions and audio waveforms that provide valuable insight into whether content is genuine.
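As a simplified illustration of that pixel-level idea, the sketch below performs a basic error level analysis (ELA), a classic forensic heuristic that re-compresses an image and looks for regions that respond differently, which can indicate local editing. It assumes Pillow and NumPy are installed; the quality setting, threshold and file name are illustrative, and production deepfake detectors rely on trained models rather than a single hand-tuned check.

```python
# Minimal sketch of error level analysis (ELA), one pixel-level forensic heuristic.
# Assumes Pillow and NumPy are installed. Thresholds here are illustrative only.
import io

import numpy as np
from PIL import Image, ImageChops

def error_level_analysis(path, quality=90):
    original = Image.open(path).convert("RGB")

    # Re-save the image as JPEG at a known quality and reload it.
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)

    # Pixel-wise difference between the original and the re-compressed copy.
    diff = np.asarray(ImageChops.difference(original, resaved), dtype=np.float32)

    # Locally edited regions often show noticeably higher error levels than the rest.
    return diff.mean(), diff.max()

if __name__ == "__main__":
    mean_err, max_err = error_level_analysis("claim_photo.jpg")  # hypothetical file
    print(f"mean error level: {mean_err:.2f}, max error level: {max_err:.2f}")
    if max_err > 60:  # illustrative threshold, not a validated cutoff
        print("High local error level; worth a closer forensic look.")
```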

Emerging AI regulations

With the increasing prevalence of AI, the number of legal cases involving AI-related matters is also on the rise. In a recent lawsuit, an author accused AI developers of unauthorized use of copyrighted material, highlighting concerns about protecting intellectual property rights in the realm of AI-generated content. Silverman v. OpenAI, Inc., No. 3:23-cv-03416 (N.D. Cal.).

Another case involves allegations against a tech giant for recording customer service calls through AI technology without proper consent, raising broader questions about privacy violations and the transparency of AI-driven processes. Ambriz, et al. v. Google LLC, No. 3:2023-cv-05437 (N.D. Cal.). These cases exemplify the legal complexities of integrating AI technology into various sectors and underscore the need for clear regulations to address intellectual property rights and privacy protection in the digital age.

Lawmakers nationwide are also taking proactive steps to regulate the integration of AI into various sectors and establish ethical standards for its use. The Department of Defense has adopted specific ethical principles for AI applications, while the legal field, exemplified by the Florida Bar, has begun introducing guidelines for lawyers on responsible AI use.

Notably, Georgia House Bill 887, introduced in January 2024, seeks to limit physicians' reliance solely on AI outputs for clinical decisions, extending to areas like insurance coverage and public assistance. These developments underscore the growing importance of balancing AI benefits with ethical considerations and human oversight in decision-making processes, a trend of paramount significance across industries.

Conclusion

As AI advances, it brings both innovative opportunities and new challenges, particularly in the area of investigating insurance fraud. The rise of deepfakes poses a significant threat to the integrity of insurance claims, highlighting the importance of robust detection measures and thorough investigative procedures. By leveraging AI technology responsibly and ethically, insurers can navigate these complexities while upholding integrity and fairness in the insurance sector. 

Ultimately, proactive measures and vigilant oversight are essential in safeguarding against the risks associated with AI-driven fraud and legal issues.
