Why ChatGPT is likely not good enough for insurers

The ChatGPT chat screen on a laptop computer arranged in Germantown, New York, on March 10, 2023. Photographer: Gabby Jones/Bloomberg

Based on the breathless assessments of some commentators, ChatGPT is poised to replace contact centers in every sphere of business, including the armies of agents currently employed in the insurance industry worldwide. Even the World Economic Forum predicts 14 million jobs will disappear globally over the next five years, partly due to the rise of AI technologies.

Really? 

Look, large language models are a great leap forward for AI. They have an amazing ability to write fluently and to display what looks like an understanding of information. But perception is not always reality.

Awkward assistants

In 2011, the IBM Watson supercomputer scooped up a $1 million prize on the American TV show "Jeopardy!", besting champions Brad Rutter and Ken Jennings. IBM donated the winnings to charity, and both men went on to win future games. Yet Watson "has been reduced to a historical footnote," The Atlantic reported in a May 5, 2023, essay, "America Forgot About IBM Watson. Is ChatGPT Next?"

In 2014, Amazon launched Alexa, which amazed everyone with its ability to understand speech. But once the novelty wore off, people tired of structuring sentences in specific ways to get the right result. Less than two years later, Google introduced Google Assistant, which was better at understanding verbal questions and searching for answers. Yet even today, it is as likely to produce meaningless results as it is to help.

Incorrect answers

For insurers, these technologies are simply not good enough today to be significantly useful. When claimants reach out to insurers, they want information that will guide major life decisions. Finding out later that the information was incorrect, or incomplete, is a big problem. It is bad enough when a human agent gives wrong information; it may be even more legally hazardous when a chatbot does.

AI chatbots were supposed to revolutionize customer support, but in practice, insurers and other businesses have found them very limited. Too often, the chatbot ends up handing the case over to a human for resolution.

Alexa, Google Assistant, Siri, ChatGPT, and others in the chatbot family are good enough to help people create drafts, but they're not good enough for insurance. If COVID taught us nothing else, it was that humans need humans. People need to talk to brokers, to insurers, to agents who are product specialists and can provide advice with empathy.

One alarming aspect of ChatGPT – hallucinations, or factual errors – should give everyone cause for concern. Contact center agents are expected to get their facts right. Yesterday's AI contact center might have produced errors, but most of the time they were easily spotted. Today's ChatGPT can not only be wrong, it will create fake information and pass it off as accurate. Unless they are subject-matter experts, customers will believe ChatGPT's convincing delivery.

How did ChatGPT acquire these persuasion skills, making good and bad information look alike? It was trained on the internet, an impressive source of both information and disinformation.

Exclusive insurance solution

"Wait!" comes the cry from the insurance world. "All we need is a ChatGPT that understands our own insurer information and ignores the rest of the internet!" One would think that would address the problem — but think again. True, there are people who are working to give ChatGPT specific information in certain topics against which it can answer questions.

The trouble is, while this additional training data increases ChatGPT's knowledge, it doesn't turn off the original data that ChatGPT acquired in its initial training from internet sources.
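To make that concrete, here is a minimal sketch of what "grounding" a chat model in insurer-specific text typically looks like, assuming the OpenAI Python client; the policy excerpt, question, and model name are hypothetical placeholders. As the comments note, the instruction narrows the model's focus, but nothing in it switches off what the model learned from the internet.

```python
# A minimal sketch of grounding a chat model in insurer-specific text.
# Assumptions: the OpenAI Python client (openai>=1.0), a hypothetical
# policy excerpt, and a placeholder model name.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

policy_excerpt = """
Water damage from a burst pipe is covered up to $10,000.
Flood damage from external water is excluded unless a flood
rider is attached to the policy.
"""

question = "Am I covered if my basement floods in a storm?"

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any chat model
    messages=[
        {
            "role": "system",
            "content": (
                "Answer ONLY from the policy excerpt below. "
                "If the excerpt does not answer the question, say so.\n\n"
                + policy_excerpt
            ),
        },
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)
# The instruction narrows the model's focus, but it does not remove what
# the model learned in pretraining; answers can still blend in
# general-internet "knowledge" alongside the excerpt.
```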

I have found that ChatGPT can work well when all you need is a single answer to a question, but it is prone to oversimplification when there are multiple answers. For example, it can tell you where the Taj Mahal is, and even throw in some fun facts. But when I asked ChatGPT to digest a policy document, it confused coverage levels and restrictions on different benefits. It tends to treat the document as a single set of facts, selecting the first one that seems to match the question, and it doesn't notice when there are multiple, differing answers depending on the product, coverage, state, or other factors.
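A toy illustration of that failure mode, with invented product names and figures: the correct answer is not a single fact but a function of several keys, and a model that grabs the first matching fact will be right for one combination and wrong for the rest.

```python
# Toy illustration (invented figures): the "right" coverage limit is not
# one fact but a function of product and state.
water_damage_limit = {
    ("HomeShield Basic", "NY"): 5_000,
    ("HomeShield Basic", "FL"): 2_500,  # stricter limit in flood-prone states
    ("HomeShield Plus", "NY"): 10_000,
    ("HomeShield Plus", "FL"): 7_500,
}

def limit_for(product: str, state: str) -> int:
    """Look up the limit by every key it actually depends on."""
    return water_damage_limit[(product, state)]

# A model that treats the policy document as a single set of facts tends
# to return whichever limit it saw first: correct for one (product, state)
# pair and wrong for the other three.
print(limit_for("HomeShield Basic", "FL"))  # 2500
```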

Google's Med-PaLM 2 is an LLM designed for the medical field, and LexisNexis has announced Lexis+ AI for the legal profession. But rushing to build a purpose-built insurance LLM ignores the industry's need for confidence in a model that has been reliably trained.

Government guardrails

Microsoft, an OpenAI investor, is rolling ChatGPT's technology into its products (as Bing Chat and Copilot), so millions of people will be exposed to both the technology's capabilities and its limitations. This may turn out to be a blessing, as more people worldwide learn how to coax accurate information out of the technology or use it for rote tasks.

Meanwhile, governments and regulators are debating generative AI: Italy recently decided to ban ChatGPT over privacy concerns, the EU is drafting the AI Act, and the U.S. Congress recently heard from Sam Altman, CEO of OpenAI. Most governments are keenly aware that AI poses significant risks if allowed to operate without legal guardrails, much as the lack of regulation for social media created problems.

Government policies aside, here's one suggestion: could the industry come together to train an LLM on trustworthy sources, such as the material we use to train new insurance professionals? With enough reliable material, an insurance LLM could pass insurance exams. That would be a huge first step.
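As a thought experiment, the "pass insurance exams" milestone is at least easy to measure. Below is a hedged sketch of a scoring harness; the exam item and the ask_model function are hypothetical stand-ins, not a real exam bank or API.

```python
# Hypothetical sketch: score a model on multiple-choice insurance exam items.
# The question bank and ask_model are stand-ins, not real artifacts.
exam = [
    {"q": "Which doctrine requires the insured to disclose material facts?",
     "choices": ["Subrogation", "Utmost good faith", "Indemnity", "Proximate cause"],
     "answer": "Utmost good faith"},
    # ... a real harness would hold out a full exam's worth of items
]

def ask_model(question: str, choices: list[str]) -> str:
    """Stand-in for a call to the industry-trained LLM under test."""
    raise NotImplementedError("wire up the model under test here")

def score(items: list[dict]) -> float:
    """Fraction of items the model answers correctly."""
    correct = sum(ask_model(item["q"], item["choices"]) == item["answer"]
                  for item in items)
    return correct / len(items)

# A pass mark (say, 70%) on a held-out exam would be a concrete,
# auditable first milestone for an insurance LLM.
```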

We are certain LLMs could revolutionize how insurers communicate with customers and business partners. Whether that happens tomorrow – or in three years' time or 30 years' time – is another question. As most readers know, these technologies move quickly, until they hit a wall, and then they can stall for years or decades.

The pace of LLM innovation is impressive. Whether ChatGPT itself pans out for insurers depends on timing, some luck, and calculated decisions about which strategy each company chooses to pursue. More likely, the wider LLM landscape will yield opportunities to change the game over time.
