Insurers' adoption of agentic AI is raising new security concerns, complicating use of the technology to service claims, according to Ofer Friedman, chief business development officer at AU10TIX, an identity verification provider.

"It is making the credential resistant, but now there are more avenues to compromise it," he said. "If you are doing identity verification, in a year or two, you will have to provide some device-based factors that support the fact that you're not dealing with a jailbroken, copied or stolen device. Then you can actually steal the device, and go and use it somewhere else."
AU10TIX recently partnered with Microsoft.
Pairing a government-issued photo ID with a live selfie had been sufficient authentication in the past, but the rise of AI-generated deepfakes has changed that, Friedman said.
"Right now, AI plays way better to the hands of fraudsters than to the hands of defenders," he said. "Since AI needs big data, you need a lot of deepfakes, a lot of real ones. Problem number one, where do you get all of these, when IDs are PII [personally identifiable information], our personal data? There's a privacy issue there. So they go back to AI tools and generate faces, and you can generate -- I've seen, this week, a new website that generates passports in deepfake with whatever face you want."
Defenses against deepfakes are based on machine learning, but someone has to teach the defense system the difference between a deepfake and an authentic photo, Friedman explained. Although Microsoft Copilot was not built specifically for insurance, identity is a large element in an insurance claim, and Copilot's AI capabilities were applicable, he added.
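The supervised approach Friedman describes can be illustrated with a toy sketch: label a set of examples as deepfake or authentic, then train a classifier on them. This is not AU10TIX's or Microsoft's system; the two feature scores and the synthetic data below are assumptions for illustration only, using a plain logistic regression trained by gradient descent.

```python
import math
import random

random.seed(0)

def make_sample(is_deepfake):
    # Hypothetical feature scores (e.g. texture noise, compression artifacts).
    # Deepfakes are assumed here to cluster around higher values.
    base = (0.7, 0.8) if is_deepfake else (0.2, 0.3)
    features = [b + random.gauss(0, 0.1) for b in base]
    return features, 1 if is_deepfake else 0

# Labeled training set: this is the "teaching" step --
# someone must supply examples marked deepfake vs. authentic.
data = [make_sample(i % 2 == 0) for i in range(200)]

# Logistic regression, trained with per-sample gradient descent.
w = [0.0, 0.0]
b = 0.0
lr = 0.5

def predict(x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 / (1 + math.exp(-z))  # probability the image is a deepfake

for _ in range(300):
    for x, y in data:
        err = predict(x) - y
        for i in range(2):
            w[i] -= lr * err * x[i]
        b -= lr * err

correct = sum((predict(x) > 0.5) == (y == 1) for x, y in data)
accuracy = correct / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

The point of the sketch is the dependency Friedman highlights: the classifier is only as good as the labeled deepfake and authentic examples it is trained on, which is why sourcing that data raises the privacy problems he describes.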
AU10TIX adapted Copilot to insurance use cases. "It's the same idea, just field tested or fire tested in real life situations with all their complexities," Friedman said. "Without identity, you cannot cash in or cash out. You cannot do anything with a policy."