Insurers are pairing their own homegrown AI-based detection tools with third-party ones, according to data from Verisk's State of Insurance Fraud study. Half of insurers are using internally developed AI tools, and 65% are using tools from a vendor or other third party, Verisk says.
The heightened use of AI is a response to a rise in manipulated media, with almost all insurers, 98%, agreeing that AI-powered editing tools are making it easier to alter claims materials.
"AI editing tools are changing how people interact with digital content, and insurance is feeling that shift in real time," Shane Riedman, president of anti‑fraud analytics at Verisk, said in a press release. "Our concern is that many consumers don't see small edits as crossing a line, but when those changes make their way into claims, they can materially affect outcomes. As manipulated media becomes more common, many insurers face growing pressure to establish clearer boundaries, improve visibility and prevent fraud — while preserving a fair and efficient claims experience for policyholders."
Fifty-eight percent of insurance companies reported being very confident in their ability to detect edits made to real photos, while 32% said they are very confident they could identify entirely AI-generated images.
Fifty-three percent of insurers said they believe half of policyholders who alter claim photos or documents don't realize the edits may qualify as fraud.
Younger consumers are more likely to dismiss the ethical considerations of making such edits; 55% of Generation Z respondents said they would consider strengthening a claim by making a digital edit, compared to 49% of Millennials, 28% of Generation X and 12% of Baby Boomers.
"Insurers aren't standing still, but the threat is evolving faster than many systems were built to handle," Riedman said in the release. "Detection tools that aren't fully integrated into claims workflows can create blind spots. As deepfakes and other AI‑driven manipulation become more common, the carriers will need more connected systems and shared intelligence to keep pace."