Predictable tasks are made for today's generative AI

David Paul Morris/Bloomberg

Predictions of artificial intelligence spreading to nearly every industry are all the rage, and the insurance industry is no exception. 

From taking over mundane office tasks to replacing call centers, and even streamlining the claims process, AI tools stand to make major inroads in insurance in the near future. 

Because AI doesn't fatigue, shifting monotonous, time-consuming tasks to it is one of the changes that could most benefit insurers, presuming AI hallucinations can be kept in check. These tasks could range from quoting policies to issuing certificates of insurance. Predictable, routine tasks are ready-made for today's generative AI. 

For the same reason, AI tools could sift through mountains of claims data, looking for signals of fraud that might slip past a human eye. 
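As a toy illustration only, not anything an insurer actually runs: even a simple statistical screen can surface claims whose amounts sit far outside the rest of a batch, which is the crudest version of the kind of anomaly signal an AI fraud screen might flag for a human reviewer. All function names and figures here are invented for the example.

```python
from statistics import mean, stdev

def flag_outlier_claims(claims, threshold=3.0):
    """Flag claim amounts more than `threshold` standard deviations
    above the batch mean -- a crude stand-in for the anomaly signals
    an AI fraud screen might surface for human review."""
    if len(claims) < 2:
        return []
    mu, sigma = mean(claims), stdev(claims)
    if sigma == 0:
        return []
    return [c for c in claims if (c - mu) / sigma > threshold]

# Six routine claims and one that dwarfs the rest (made-up numbers).
claims = [1200, 950, 1100, 1050, 980, 1150, 25000]
print(flag_outlier_claims(claims, threshold=2.0))  # [25000]
```

A real system would model far richer features (claim history, provider networks, timing patterns), but the principle of scoring deviations from an expected baseline is the same.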

The worry about hallucinations is real, though. In one high-profile case, a major airline relied on a generative AI chatbot for customer service on its website, and that chatbot invented a refund policy that didn't exist. The airline was then held legally responsible for honoring the invented policy for that customer. 

That cautionary tale shows that, at least for now, AI-automated work still needs quite a bit of human oversight. 

Moving beyond the mundane work, AI tools could also revolutionize the underwriting process, as long as their biases can be kept in check. 

With AI's ability to digest massive amounts of data, underwriters could move into whole new worlds of datasets to assess risk. These could include customer reviews on third-party sites, public records from countless jurisdictions, even social media comments. 

The danger of bias stems from AI's tendency to find patterns and base future decisions on them. If an inequity already existed, such as one area of town not getting sufficient coverage, then an AI underwriter left to its own devices might double down on that inequity, inferring that it was right to exclude that population and continuing to do so going forward. 
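A minimal sketch of that feedback loop, with entirely made-up data: if an underwriting model naively learns the majority historical decision for each neighborhood, any past exclusion baked into those records simply becomes its policy going forward.

```python
from collections import defaultdict

# Hypothetical historical decisions: (zip_code, outcome) pairs.
# Zip "12345" was historically underserved -- mostly denials.
history = [
    ("10001", "approve"), ("10001", "approve"), ("10001", "deny"),
    ("12345", "deny"), ("12345", "deny"), ("12345", "approve"),
    ("12345", "deny"),
]

def learned_policy(history):
    """Naively learn the majority past decision per zip code.
    Training on biased history reproduces the bias."""
    counts = defaultdict(lambda: {"approve": 0, "deny": 0})
    for zip_code, outcome in history:
        counts[zip_code][outcome] += 1
    return {z: max(c, key=c.get) for z, c in counts.items()}

policy = learned_policy(history)
print(policy)  # the historically underserved zip keeps getting denied
```

Real underwriting models are far more complex, but the mechanism is the same: without intervention, the model treats the historical pattern as ground truth rather than as an inequity to correct.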

In a worst-case scenario, the AI could see high rates that were charged to a racial minority or to a specific gender and assume that is how it should be. The problem is that bias in AI-priced policies might not be immediately obvious, because AI models operate as black boxes: often not even the programmer knows why the AI makes certain decisions. 

At the same time, insurers are legally obligated to avoid systemic discrimination, and observers worry that moving AI into underwriting will invite more regulatory oversight, much as Colorado is already doing. 

Text-based generative AI isn't the only place insurers are likely to take advantage of AI. New tools that integrate AI image recognition could soon be used to analyze aerial photographs and satellite imagery. This data could be used for everything from risk mitigation (think warning a commercial customer that its roof is pooling water) to underwriting and post-disaster claims processing. 

Those text-based generative AI tools could then jump in and help write coverage opinions based on the language of individual policies. 

From a customer's standpoint, AI tools could join the suite of onboard software built into today's vehicles. 

Using online traffic data, a real-time usage-based insurance policy could become possible. After the driver punches in a destination, the AI could quote one route as riskier, and therefore slightly more expensive, while offering a discount for the safer route. Drivers could see those options right on the vehicle's infotainment screen. 
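A back-of-the-envelope sketch of what such a per-trip quote could look like. Every number and risk multiplier here is invented for illustration; a real pricing model would draw on actuarial data, traffic conditions, and driver history.

```python
def quote_trip(base_rate, miles, risk_multiplier):
    """Toy usage-based premium: a per-mile base rate scaled by a
    route risk multiplier (>1 means riskier, <1 means safer)."""
    return round(base_rate * miles * risk_multiplier, 2)

# Two hypothetical routes to the same destination.
routes = {
    "highway (riskier at rush hour)": quote_trip(0.05, 12.0, 1.2),
    "surface streets (safer)": quote_trip(0.05, 14.0, 0.9),
}
for name, price in routes.items():
    print(f"{name}: ${price}")
# The safer route comes out cheaper despite being longer.
```

The design point is that the discount is attached to the route choice itself, so the driver sees the price difference before pulling out of the driveway.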

Elements of vehicle technology could also work with AI tools to speed up the claims process after an accident, integrating with vehicle sensors and uploaded images for near-instantaneous damage estimates. The AI could also interface with market data to coordinate repairs. An AI-driven autonomous vehicle could even deliver a replacement right to the customer's driveway while the damaged vehicle drives itself to the best repair shop. 

Those driverless vehicles raise a question of liability, though: who is liable if there is a wreck? Liability hinges on fault, so who could be at fault in a crash involving a totally autonomous vehicle? The owner of the vehicle? The manufacturer? The software programmer? Those questions have not been hammered out in court, but it's safe to say that if anyone but the vehicle owner is ultimately found to be at fault, new clauses will have to be written into commercial liability and commercial auto policies, as well as professional liability, product liability, and errors and omissions policies. 

AI tools could end up creating business liability beyond the self-driving vehicle. Take copyright violations and deepfakes, for example. If an AI creates a piece of content that opens a business up to legal liability, it will likely fall on the insurer to settle the case. The insurance industry is going to have to come up with standards for where that fault should ultimately lie. 

But whether they replace outsourced call center work or get under the hood of underwriting, AI tools are going to keep bleeding into insurance for years to come. 
