WTW's pricing expert talks about AI impact

Willis Towers Watson's U.S. headquarters, 800 N. Glebe Road, Arlington, Virginia.

Duncan Anderson, global technology leader for insurance consulting and technology at Willis Towers Watson, has worked for over 30 years in insurance pricing and analytics, starting in the business as an actuary. With WTW, Anderson and his colleagues advise about 1,000 insurers on strategy and technology, in addition to providing Radar, a pricing, claims and underwriting software suite. Digital Insurance spoke with Anderson about how he sees the current landscape of development of artificial intelligence and machine learning, as applied to pricing. The idea that people overestimate what can be accomplished in a year or two and underestimate what can happen in 10 years, often attributed to Bill Gates, resonates with Anderson as applicable to insurance pricing and AI.

What are the issues for pricing in relation to AI technology advances?

Duncan Anderson, global technology leader for insurance consulting and technology, Willis Towers Watson.
Pricing consists of four different areas: analysis, decisioning, deployment and monitoring. Technology has changed each of those in different ways. In insurance pricing, analysis means understanding the risk: the likely cost of claims. It's also about understanding policyholder behavior on personal lines. Today, insurers worldwide have at their disposal a very rich, powerful toolkit of machine learning models that can very quickly and easily produce highly predictive models.

With that powerful prediction come issues with interpretability, because a lot of these models are quite hard to understand. That can be a big problem in insurance for two reasons. Firstly, unlike marketing or other functions where it's okay to have an 80-20 model, if you misprice insurance business, you can lose a lot of money very quickly. When things change, as they did during the COVID pandemic, some insurers are sharp at noticing it; others are slow. Some relied on clever machine learning models calibrated pre-COVID that weren't as good post-COVID.

Secondly, there's a wall of regulatory issues to be tackled. There are 50 flavors of U.S. regulation, and now quite a bit of pricing regulation to adhere to. For that, not only understanding what your models are doing, but also being able to explain them, is really important.

There's been less change in decisioning from a technological perspective. But given all this modeling, it's important to scenario-test what you want to do, and work out what's best for the business in underwriting, pricing or other portfolio-management actions. It's important to construct a calculation that predicts as accurately as possible what might happen.

WTW has developed proprietary machine learning models that are interpretable by design. We have patents pending on interpretable machine learning models that are just as predictive, but transparent: you can see which factors explain the risk and the behavior, understand very clearly what's going on, and manage the models much better as a result.
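To make the idea concrete: one long-standing form of interpretable pricing model is a generalized linear model with a log link, where each rating factor's fitted coefficient reads directly as a multiplicative relativity. The sketch below, in plain NumPy with invented factors and effect sizes, is not WTW's patented approach or the Radar software; it just fits a small Poisson claim-frequency model and prints the recovered relativities.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical rating factors (illustrative only).
age = rng.integers(0, 3, n)      # 0 = young, 1 = middle, 2 = senior
region = rng.integers(0, 2, n)   # 0 = urban, 1 = rural

# Assumed "true" multiplicative effects on claim frequency.
base = 0.10
age_rel = np.array([1.8, 1.0, 0.9])
region_rel = np.array([1.3, 1.0])
claims = rng.poisson(base * age_rel[age] * region_rel[region])

# Design matrix: intercept plus dummy columns (base levels: middle age, rural).
X = np.column_stack([
    np.ones(n),
    (age == 0).astype(float),
    (age == 2).astype(float),
    (region == 0).astype(float),
])

# Fit a Poisson GLM with a log link by gradient descent on the
# negative log-likelihood (its gradient is X'(mu - y)).
beta = np.zeros(X.shape[1])
for _ in range(3000):
    mu = np.exp(X @ beta)
    beta -= 0.5 * X.T @ (mu - claims) / n

# exp(coefficient) is the factor relativity: directly interpretable.
relativities = dict(zip(
    ["base_frequency", "age_young", "age_senior", "region_urban"],
    np.exp(beta).round(2),
))
print(relativities)
```

Because every effect is a visible multiplier, an actuary can sanity-check each relativity against experience, which is exactly the transparency that black-box learners lack.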

Once you decide what to do with pricing structures, claims, underwriting rules and case-triaging rules, that has to be deployed into the real world, into a policy administration system. Increasingly, technology has enabled very complex things to be done at the point of sale. But perhaps most importantly, helped by the adoption of cloud computing, many systems out there are much more interoperable and play more nicely with APIs, allowing calls from one system to another and a componentized approach. That enables the deployment of deep analytics, undiluted by errors and without a costly process, so you can deploy much more quickly and respond very quickly to developments in the market.

The fourth area, monitoring, is also changing. By analyzing whether a change actually matters, we can proactively identify when something needs attention and, if it does, automatically identify why. That allows model management to happen more easily, and it also supports wider portfolio management.
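As a toy illustration of that kind of automated monitoring (the segment names, counts and threshold here are invented, not how Radar implements it), an actual-versus-expected check can flag only the segments whose observed claim counts drift beyond statistical noise:

```python
import math

def flag_drifting_segments(expected, actual, z_threshold=3.0):
    """Flag segments whose actual claim count deviates from the model's
    expected count by more than z_threshold standard deviations,
    using the Poisson approximation (variance ~= mean)."""
    flags = {}
    for segment, exp_count in expected.items():
        z = (actual.get(segment, 0) - exp_count) / math.sqrt(exp_count)
        if abs(z) > z_threshold:
            flags[segment] = round(z, 1)
    return flags

# Invented numbers: model expectations vs. post-shock experience.
expected = {"young_urban": 200.0, "senior_rural": 80.0}
actual = {"young_urban": 290, "senior_rural": 84}
print(flag_drifting_segments(expected, actual))  # only young_urban is flagged
```

Routine deviations stay below the threshold and never reach a human, while genuine shifts (like the pandemic-era changes mentioned above) surface automatically with a measure of how severe they are.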

This automation removes the dross from an expert's life, and enables them to do what they're good at, which is thinking and bringing insurance experience to bear. None of this is about replacing the expert. It's about empowering the expert. The insurers that will win are those that embrace technology developments and analytical tools, and use them effectively, understanding insurance and keeping their eye on the ball. Problems like not spotting inflation or models gone wrong come from inexperience in insurance management, overreliance on models, and approaches that were not fit for purpose. Empowering the experts so they can be experts is a big theme that we mustn't forget.

Is the industry overestimating where machine learning and AI will be in two years? Will it take 10 years to get to what is envisioned?

It varies a lot by market. Many U.S. insurers and U.K. insurers have really mastered machine learning in many different ways. The use of machine learning to create powerful models is quite mature. But embracing interpretable machine learning and embedding that is at the earliest stages. The automation of monitoring is within the two-year period rather than the 10-year period. Machine learning, open source and cloud computing are around. They're maturing, but we're at the beginning of marshaling that into an expert state and using it to best effect. We have a particular model that's really good and well understood, but there's still quite a way to go to becoming a real master of this.

Is the interpretable machine learning that you describe the same thing as generative AI?

It is a little bit different, and it's probably less exciting, because it's slightly lower level. It's a bit more like a mathematical model. But actually, that's what you want at the claims modeling level. You could think of interpretable AI a little bit like a copilot. If you have an interpretable AI model or interpretable machine learning model, you can either use it directly, or you can use it to guide how you build a traditional model. I'd never heard the word "copilot" until a year ago, when it became fashionable with generative AI, but you can think of it as being a guide.

How will generative AI play out in the insurance underwriting and pricing space?

It is a little bit different. It is a different technology, and probably a simpler one. But it's still quite useful and pertinent, and very valuable in the narrow use case of insurance pricing and underwriting. Insurance is a funny game, because if you get things wrong, bad things happen, and other industries are not always like that. With the regulatory issues, it's a little bit unusual.

In my world, developing insurance technology, we see two big things with generative AI. I'm really interested in using it as an internal tool to make our engineers more productive. We see potential, though as yet unproven, for material improvement in productivity from using generative AI and different engineering development tools to build things for us. Generative AI can help create tools, but also help with reporting, interfacing and problem definition.

How does machine learning or AI change risk pricing for personal lines or P&C?

For insurers in an efficient market, it may not dramatically change the outcome for the customer, because the customer can always choose the cheapest quote they like. For the insurer, it continues the pricing arms race: you now have to be very, very good at segmenting risk, as accurately as the next company, because if you aren't, you can suffer. WTW has increased the sophistication of the way it assesses risk. Those that are slower to adopt the new techniques are always going to suffer as a result.

The adoption of machine learning continues – maybe not an "arms race" – but competition. Interpretable machine learning is probably just the next thing that needs to be done to ensure that the models remain fit for purpose whilst navigating the regulatory landscape.

What may happen in these areas in the next year or two?

More automation of monitoring and greater adoption in commercial lines. Who knows exactly where generative AI will take things? At the moment, it's relatively nascent. The areas that I've identified are the areas which at the moment seem to be the most fruitful. There's a huge amount it might do for insurers in operations – customer chatbots, producing reports, analyzing claims conversations and the like. Underwriting and pricing is the most purely mathematical part of the whole chain. Therefore, maybe generative AI will be brought to bear a bit more in the other areas.