AI, machine learning offer great opportunities and major risks for insurers

The promise of artificial intelligence in the insurance industry is threefold:

  • To help insurers get deeper insights into their business and make better decisions.
  • To automate insurer processes and decision-making for increased efficiency and reduced cost.
  • To create a better customer experience that provides more immediate responses and requires less arduous data collection.

But with that promise comes risk – perhaps greater than the risks inherent in all new technologies – that the insurance industry needs to consider. The Financial Stability Board released a detailed report this month on the potential upsides and downsides of AI, most of which align with Novarica's research on the technology.

Risk one: Reliance on external players

Many of the risks highlighted in the report stem from financial services companies' increasing reliance on outside technology companies for key business components. That reliance creates several problems:

  • It creates an environment where the regulatory bodies overseeing banking and insurance may not have the same influence or access to those critical tech companies.
  • When outside players provide the technology used for decision-making, auditability is limited after something has gone wrong.
  • A single point of failure creates problems for the whole industry; if one technology player is hacked or goes out of business, it can impact a large percentage of financial institutions at once.

Risk two: Opacity of decision-making

Another source of risk is that the results of AI and machine learning may be too complex for humans to fully understand. As the FSB report puts it, “New trading algorithms based on machine learning may be less predictable than current rule-based applications and may interact in unexpected ways.” The report focuses mostly on how such opacity could create volatility or instability in the financial markets through AI-based trading, but the concern applies to the insurance industry as well.

Existing regulatory rules in the insurance space make this risk less likely in some areas of the business. Because insurers writing admitted lines must file their rates with the state, there's a limit to how much AI will be able to influence actuarial models. Insurers may use AI behind the scenes to identify new variables and tables, but that information will need to be translated into traditional rate models. Many insurers use predictive analytics in a similar way, reducing model scores to broad pricing tiers (e.g., platinum, gold, silver status) for the same reason.
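For illustration, here is a minimal sketch of that score-to-tier pattern. The tier names, cut points, and rate factors are hypothetical, not filed rates:

```python
# Minimal sketch: collapsing a continuous model score into filed rate tiers.
# The tier names, cut points, and factors below are illustrative assumptions.

def score_to_tier(risk_score: float) -> str:
    """Map a model's continuous risk score (0.0 = best, 1.0 = worst)
    to one of the broad pricing tiers in a filed rate plan."""
    if risk_score < 0.2:
        return "platinum"
    elif risk_score < 0.5:
        return "gold"
    else:
        return "silver"

# Each tier carries a filed rate factor; the model's raw score never
# reaches the rating engine directly.
TIER_FACTORS = {"platinum": 0.85, "gold": 1.00, "silver": 1.25}

def rated_premium(base_premium: float, risk_score: float) -> float:
    return base_premium * TIER_FACTORS[score_to_tier(risk_score)]

print(rated_premium(1000.0, 0.12))  # 850.0 -- platinum tier
```

However sophisticated the underlying model becomes, only the discrete, filed tier ever touches the premium, which keeps the rating logic reviewable by regulators.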

Another example is an insurer that piloted the use of AI for determining direct marketing campaigns. After IT ran the company's data through a neural network to generate demographic priorities, the marketing organization ignored the results: the team saw the AI as a black box and didn't understand the prioritization. When IT reverted to a simpler model built on scores for a few key factors, the effort was seen as a success, and the marketing team adjusted its direct marketing approach. Despite the less sophisticated model, visibility into how the algorithm reached its conclusions meant humans could understand and buy into the results.
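As a rough sketch of what such a transparent model can look like, consider a weighted score over a handful of named factors. The factor names and weights below are hypothetical; the point is that every contribution to the final score can be read off and explained to the business:

```python
# A transparent scoring model: a few named factors with visible weights,
# in contrast to a neural network's opaque internals. Factor names and
# weights are hypothetical illustrations.

WEIGHTS = {
    "age_band": 0.4,
    "household_income": 0.3,
    "prior_policy_count": 0.2,
    "region_response_rate": 0.1,
}

def prospect_score(features: dict) -> float:
    """Weighted sum over a few normalized factors (each in 0..1)."""
    return sum(WEIGHTS[name] * features[name] for name in WEIGHTS)

def explain(features: dict) -> None:
    """Print each factor's contribution so the result is auditable."""
    for name in WEIGHTS:
        print(f"{name}: {WEIGHTS[name]} x {features[name]:.2f} "
              f"= {WEIGHTS[name] * features[name]:.3f}")

prospect = {"age_band": 0.8, "household_income": 0.6,
            "prior_policy_count": 0.3, "region_response_rate": 0.9}
print(f"score = {prospect_score(prospect):.3f}")  # score = 0.650
explain(prospect)
```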

A combination of regulations and human resistance will likely slow down the adoption of pure AI-driven decision-making at many insurers. However, that won’t hold it off forever, and insurers will need to carefully watch areas where AI technology means humans no longer fully understand why certain decisions are being made.

Risk three: Implementing bias

An important finding from the report is worth repeating in full. “Even if innovative insurance pricing models are based on large data sets and numerous variables, algorithms can entail biases that can lead to non-desirable discrimination and even reinforce human prejudices.”

Machine learning algorithms use historical data to make future decisions. If insurers train AI systems on past decisions made by humans, then any human biases inherent in those previous actions become part of the training. Without careful review, it's possible (or likely) that AI systems will learn and unintentionally magnify human bias when making future decisions. Insurers, with their large stores of data – and their impact on human lives – have an obligation to work against such biases. Ceding decision-making to algorithms does not absolve the industry of that responsibility.
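One concrete form that careful review can take is a pre-training audit of the historical decisions themselves. The sketch below compares approval rates across a sensitive attribute before those decisions are used as training labels; the data, group labels, and 80% threshold are illustrative assumptions, not a compliance standard for any jurisdiction:

```python
# Pre-training bias check: compare historical approval rates across groups
# before using past human decisions as labels. Data and threshold are
# illustrative assumptions only.

from collections import defaultdict

def approval_rates(records):
    """records: iterable of (group, approved) pairs from past decisions."""
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, was_approved in records:
        total[group] += 1
        approved[group] += int(was_approved)
    return {g: approved[g] / total[g] for g in total}

def disparate_impact_flags(rates, threshold=0.8):
    """Flag any group whose rate falls below `threshold` of the best group's."""
    best = max(rates.values())
    return {g: r / best < threshold for g, r in rates.items()}

history = [("A", True), ("A", True), ("A", False),
           ("B", True), ("B", False), ("B", False)]
rates = approval_rates(history)
print(rates)                         # A: ~0.67 approved, B: ~0.33 approved
print(disparate_impact_flags(rates)) # B flagged: below 80% of A's rate
```

A flagged group doesn't prove discrimination on its own, but it tells the insurer to investigate before the pattern is baked into a model.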

Risk four: When AI is too smart

One existential threat is that AI and access to big data may make insurers too good at predicting risk for individual applicants. The insurance industry works because of the nature of a risk pool, where overall premiums cover the percentage of people and businesses that file claims. But when predictive abilities reach a certain level, the risk pool breaks down. Will AI become so intelligent that it creates entire classes of uninsurable people?
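A toy calculation makes the pooling argument concrete. Assume four members with hypothetical expected annual losses, and blend between a single pooled rate and perfectly individualized pricing; as prediction improves, the highest-risk member's premium approaches their full expected loss:

```python
# Toy illustration of risk-pool breakdown. The expected losses and the
# linear blending scheme are illustrative assumptions, not actuarial method.

true_annual_loss = [100, 200, 500, 5000]  # expected claims per member

def premiums(blend: float):
    """blend=0.0 -> pure pooling (everyone pays the average);
    blend=1.0 -> perfect prediction (everyone pays their own expected loss)."""
    pooled = sum(true_annual_loss) / len(true_annual_loss)
    return [pooled * (1 - blend) + loss * blend for loss in true_annual_loss]

for blend in (0.0, 0.5, 1.0):
    print(f"blend={blend}: {premiums(blend)}")
# blend=0.0: [1450.0, 1450.0, 1450.0, 1450.0]
# blend=0.5: [775.0, 825.0, 975.0, 3225.0]
# blend=1.0: [100.0, 200.0, 500.0, 5000.0] -- the pool has dissolved
```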

This is already a problem in the health insurance space, especially with pre-existing conditions, which is why the government is involved. Insurers need to be wary of other lines going the same way, where state or federal regulators suddenly have to step in to ensure certain drivers aren't excluded from auto insurance or certain businesses from liability coverage.

Insurers have a responsibility to do the best job they can in determining risk and setting rates. As such, it’s unlikely that the industry will hold off on applying new risk-predictive technologies when they become available. But more than any other industry, insurers hold their overall mission to help people and businesses above pure profit, and this threat is one they will eventually need to grapple with as AI technology matures.

This blog entry has been reprinted with permission from Novarica.
