How to avoid mistake-ridden AI output

Takeaways:

  • Incorrect AI results break consumer trust
  • AI needs enough data for a complete view of claims and underwriting
  • Human review is essential

Using AI can amount to a "garbage in, garbage out" scenario if you're not careful. This is especially problematic for insurers, which rely on accurate information gleaned from clean data.

"If you're not providing the real picture, AI will give you the output, but it'll be the wrong output," said Nemo Dighe, associate director for business intelligence at Group 1001, a life insurance and annuity company. "That's how people lose trust with these tools really fast. You're building something incredible, but something that cannot be used, not trusted, not adopted — that's not a win."

Insurers considering whether to allow AI to make claims and underwriting decisions should first determine if the data they feed the AI is complete and correct, according to Dighe, who spoke in a recent webcast.

"Is the data actually ready? Is the data really clean? Is the data not biased? If there's any biases, the AI output is going to be biased," she said. "Think about, do you have the data in one storage space? Is it on one database that you can connect AI to, or is it migrated over 10 different legacy systems? Does AI have the full clear picture with the data you are providing?"

More insurers now rely on AI to build their data processes themselves, from data sets to pipelines, Dighe added.

"AI initially speeds up the development. When I say development, I am going all the way back to building data sets, building data pipelines," she said. "Folks are relying more and more on AI for their development now. Since they have free time, now they can actually go ahead and research. It gives you an opportunity to cross train yourself, to go figure out what's happening."

Still, operational decisions can fall short when confidence and trust are lacking, said Kate Dombrowski, vice president of claims litigation at Selective, who also spoke in the webcast.

A lack of confidence in AI-generated decisions can be caused by "incomplete information, poor data quality, lack of expertise of the end user using the tool, or lack of expertise in the decision they're trying to make, or lack of capacity," she said. "It can even come from challenging organizational culture or accountability."

While AI tools can help frame the issues in front of a claims adjuster or improve policyholders' experience when filing claims, technology leaders must determine how much information is enough to trust a decision the AI has made, Dombrowski explained.

On the back end, once AI tools generate a decision or an action to take, human review is a must, she said. "There is no substitute for the expertise that the more experienced practitioners bring to the review of that output to ensure that it is actually reliable."
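
That review step can be made structural. As a sketch of one way to enforce it (the AIDecision fields, thresholds, and routing labels here are assumptions for illustration, not a description of Selective's systems), low-confidence or high-stakes AI outputs can be routed to an experienced practitioner instead of executing automatically:

```python
# Hypothetical human-in-the-loop gate for AI-generated claim decisions.
from dataclasses import dataclass

@dataclass
class AIDecision:
    claim_id: str
    action: str        # e.g., "approve", "deny", "investigate"
    confidence: float  # model's self-reported confidence, 0 to 1
    payout: float      # dollars at stake

def route(decision: AIDecision,
          min_confidence: float = 0.9,
          max_auto_payout: float = 10_000.0) -> str:
    """Send low-confidence or high-stakes decisions to a human reviewer."""
    if decision.confidence < min_confidence or decision.payout > max_auto_payout:
        return "human_review"
    return "auto_execute"

print(route(AIDecision("C-1", "approve", confidence=0.97, payout=2_500)))  # auto_execute
print(route(AIDecision("C-2", "approve", confidence=0.62, payout=2_500)))  # human_review
```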

