Track 7: Using intelligent decisioning engines across the insurance value chain

In a competitive market it is vital for insurers to differentiate themselves. One of the best ways to improve efficiency and enhance the customer experience is through intelligent decisioning.

An intelligent decisioning engine uses AI to ingest both structured and unstructured data and make decisions based on it.

This session will explain what intelligent decisioning is and discuss potential use cases across the insurance value chain.

Key Takeaways:
  • What is and what is not intelligent decisioning
  • Challenges with AI and intelligent decisioning
  • How intelligent decisioning is being used throughout the insurance life cycle, from underwriting to claims to enhancing the customer experience
Transcript:

Martina Conlon (00:11):

All right, why do not we go ahead and get started. Good afternoon everyone. I am Martina Conlon, and I am with Aite-Novarica, a research and advisory firm focused on insurance technology and operations, where I lead the P&C practice. We are here today to talk about using intelligent decisioning engines across the insurance value chain. And I am lucky enough to be here with, hold it. Alright, that is not,

Paul Bessire (00:43):

There we go.

Martina Conlon (00:43):

There we go. Paul and I are here to review a bit of research that we have, as well as discuss Paul's experiences at Coterie. Paul, do you want to introduce yourself?

Paul Bessire (00:56):

Sure. I am Paul Bessire, Chief Data Officer at Coterie Insurance. We are an MGA focused on the small commercial space with autonomy over pricing, and we do servicing and claims as well. We are about a $50 million book today and growing, which is good. My role is that I oversee both the flow of information, which was more the previous conversation, and then this conversation, which is what we do with it. Our primary focus, and our core product, with its focus on speed, simplicity, and service, is two-question quoting: with as little as just knowing business name and address, we can get to a bindable quote within 20 seconds in the small commercial space. My background, very quickly: I am not from insurance. I have been in this industry for two years, all of it with Coterie. I spent 15 years in predictive analytics in sports and about four years in management consulting. I knew the founders of Coterie pretty well and loved the use case, got tapped on the shoulder in March of 2021 to jump on board, and have been trying to solve this problem ever since.

Martina Conlon (01:59):

Great. And as I mentioned, I am Martina Conlon, and I am with Aite-Novarica. I have been with Aite-Novarica for about 15 years, where I have been lucky enough to work with about 150 insurer clients, helping them make better, faster decisions around insurance technology, as well as with our 60 to 70 vendor clients, helping them develop products that really offer the best to our insurance community. I apologize: when I signed up for both of these sessions, I did not realize they were one right after the other, so for those of you who were here previously, there is some repetition in some of our research slides. But insurance is easy. There really are three things that allow you to deliver value in insurance and where you can deliver profit: sell more, manage risk better, and cost less to operate.

(02:59)

Each of these levers of value has strategies behind it as well as technology behind it. So again, the insurance industry is pretty straightforward, and typically business goals are oriented around those three things. At the end of 2022, we surveyed our client base and our research council about the top capabilities that the business wants IT to deliver. As you can see for 2023, BI and analytics is a top priority: delivering predictive models, delivering the data that is absolutely key for them to understand their market and what types of products they should be delivering. Another item at the top of mind is distributor ease of doing business, which of course has to do with not just underwriting the business that your distributors want to bring you, but also doing it very quickly.

(04:07)

Increasing levels of automation and reducing turnaround times, where analytics is incredibly important. And then finally, reducing operating expenses and optimizing internal processes; those are both key, and analytics is very important for both, in terms of understanding what your KPIs and your productivity metrics are, being able to tell whether you are improving or not, and whether you are going to be able to grow your revenue without having to grow the number of people in your organization, so that you can scale your business while increasing the level of automation that you have. So predictive analytics are key to delivering those types of capabilities.

(04:55)

So this is a very quick depiction of the insurance value chain. It really goes from product development all the way through to paying claims and, as finance departments like to do, counting the money at the end. When we look across that insurance value chain right now, there is an awful lot of activity with intelligent decision making, predictive models, and decision engines in underwriting, policy services, and claims. For underwriting, the level of automation you can reach depends on what market you are in, whether it is small commercial, middle market commercial, large commercial, or personal lines. Auto-underwriting a policy has certainly been accomplished in personal lines, and it is becoming more common in the small commercial space, which Paul will be able to talk quite a bit about. And there is a level of pre-underwriting that can be accomplished in all lines of business through these intelligent decision engines: it may stop short of issuing a policy, but it lifts a major portion of the data gathering, the simple analytics, and the application of basic appetite and underwriting rules, and makes recommendations to the underwriter around what kind of discounts you can offer, what kind of coverages, and what kind of limitations you may want to put on, really assisting.
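To make that concrete, here is a minimal Python sketch of what such a pre-underwriting pass might look like: gathered data is run through basic appetite and underwriting rules and surfaced as recommendations rather than a final decision. The class codes, thresholds, and field names are hypothetical illustrations, not any carrier's actual rules.

```python
# Minimal sketch of a pre-underwriting appetite/rules pass.
# All class codes, thresholds, and field names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Submission:
    class_code: str
    annual_revenue: float
    years_in_business: int
    prior_losses: int

@dataclass
class PreUnderwritingResult:
    eligible: bool
    reasons: list = field(default_factory=list)
    recommendations: list = field(default_factory=list)

APPETITE_CLASS_CODES = {"retail", "office", "restaurant"}  # hypothetical appetite

def pre_underwrite(sub: Submission) -> PreUnderwritingResult:
    result = PreUnderwritingResult(eligible=True)
    # Basic appetite and underwriting rules.
    if sub.class_code not in APPETITE_CLASS_CODES:
        result.eligible = False
        result.reasons.append(f"Class {sub.class_code} outside appetite")
    if sub.prior_losses >= 3:
        result.eligible = False
        result.reasons.append("Loss history exceeds threshold")
    # Assist the underwriter rather than replace them: suggest, do not decide.
    if sub.years_in_business >= 5 and sub.prior_losses == 0:
        result.recommendations.append("Consider loss-free credit")
    if sub.annual_revenue > 1_000_000:
        result.recommendations.append("Recommend higher liability limit")
    return result
```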

(06:32)

It is a real underwriting assistant: not a person, but assistance and guidance to the underwriter. And then finally, in claims, we know that there is already a great amount of activity around fraud detection and referring things to a subrogation unit or special investigations unit. So predictive models are already heavily used in the claims space and will continue to develop as AI technologies supercharge those capabilities, allowing more and more claims to be straight-through processed or at least more streamlined.

(07:21)

And just to talk a little bit about the evolution of underwriting and how data and analytics have so significantly impacted our industry: this looks, at a super high level, from the nineties until now at the activities, how the focus areas have changed over time, and how the amount of effort that has to be put in has changed. Back in the nineties and before, we had data collection from agents and via documents, whether PDFs or snail mail, where traditional underwriting processes took place and a lot of time was spent gathering information. This is where the underwriter assistant role became so important: gathering the information and entering it into your quoting system or your policy system, whichever you had at the time.

(08:25)

Some time was spent on validating and some on analyzing, but you only had so much data that you could analyze back then. In the late nineties and early two thousands, the proliferation of third-party data that was suddenly available through lots of different sources made the task of gathering data quite simple and straightforward to some degree, in most lines, not all lines. There was still validation necessary, but suddenly we also had a lot of data to actually consume. And so the challenge became that the underwriter needed to balance and interpret lots of data points when underwriting a policy. Luckily, as we move into the two thousands, presentation tools became better, allowing an underwriter to have a visual understanding of the risk associated with a prospect.

(09:33)

We had the introduction of predictive models, which became very common, and analytics made a huge difference for either the assisted underwriting process or complete straight-through processing. And finally, where we stand now, we have AI-enabled technologies: what used to be property information that we would gather from LexisNexis or Verisk is now generated by insurtech companies that are doing image analysis and delivering integrated property intelligence to us. That has supercharged the gathering and integration processes, allowing us to more readily and quickly underwrite a policy and reduce the turnaround time back to the agents. If you were in the prior session, Travis from Berkshire Hathaway was talking about those data sources: having instant, accurate information available so that they could quickly turn around transactions for their large commercial customers. This is where the availability of this data, the speed of delivery, and the interpretation and analytics around the data are making a huge difference.

(11:02)

And we still have straight-through processing going on. It is ever present in underwriting and becoming more common in claims. That level of automation and all of the technologies we talked about, third-party data, predictive models, integrated data, image recognition and interpretation, and the intelligence that comes out of those, are playing a key role in growing the window of what can be straight-through processed. I feel like we should probably stop there. Do you want to talk a little bit about Coterie and give a little background on what you are doing? We have some underwriting and loss trend slides, but I want to make sure that we have enough time to hear your story.

Paul Bessire (11:52):

Absolutely. And I like this topic. I think that Coterie fully embraces intelligent decisioning to its fullest extent, at least at the underwriting stage of the value chain. Claims, servicing, and potentially even lead gen, kind of working back up the chain after the fact, are areas in which we will be doing the same. If you want to jump just past this, actually one more slide, did you want to do the video? We will do the video in just a second; we will jump right to it. If you are a competitor or want to be a competitor, you can take a picture of this, because this is exactly how our IP works, and I will get into it in a second. But there are two things I wanted to note that are intelligent decisioning and one that is not. At least as it relates to Coterie specifically, one thing that is not intelligent decisioning is somebody looking at data in between receiving it and a decision being made upon it.

(12:51)

The best way to articulate what it is, at least at Coterie with respect to underwriting, is that we have one underwriter. We have one underwriter and a $50 million book. That book will be a couple hundred million dollars within a couple of years, thank you, John, our VP of Growth and Distribution, if you are not familiar with him, but that will all happen, and yet we only have one underwriter. Not only is that the case, but we are also doing something that I did not realize, coming from outside of insurance, is relatively unique, especially in the commercial space: we are actually underwriting the risk itself. We are paying very close, specific attention to exactly what that specific risk is. We are a brand new company, which makes the ability to do that make sense, right? Because we do not have legacy systems or legacy thinking or a legacy book that is 80-plus percent renewals, where we can rely on pooling and aggregation and the things that are typically considered core tenets of insurance.

(13:49)

For us, we need to make sure that we can price appropriately, and in order to price appropriately, we need to be as informed as possible about that policy and what that risk looks like. The beauty of that, and what intelligent decisioning is, is that we can be incredibly informed about a risk and have somebody who manages a set of rules or heuristics or models, or interacts with the models that Martina alluded to, who never has to look at one individual policy and is still theoretically underwriting every individual policy right down to its risk. So that is one of the ways we are thinking about what intelligent decisioning is and is not. The other thing I would mention, and I go to a lot more data-centric conferences than maybe this one, is that there is a lot of talk about data, AI, ChatGPT, etcetera.

(14:40)

All of it is relevant, but even in most of the conferences where I see people talking about it, ultimately information is served up to someone to make a decision. That is not today's topic. And I am not saying that is always wrong; I like to be as informed and contextual as possible in any conversation or discussion I have, and that is really what we are thinking about there. But by the time we get to this point, you are not serving data up to an underwriter, you are not serving data up to executives who then make decisions on it. The decisions that need to be made happen upstream. They happen beforehand, they happen strategically. To me, if you are waiting for data to come in to then make a decision about it, you have not had the right conversations beforehand about what to do with it when it comes in and the tools that will be built off of it.

(15:29)

That was my little bit of a soapbox; apologies for the diatribe. I will explain what is on the screen. Very quickly, we have as little as two questions that we have to answer from our agents. All of our distribution strategy interacts with independent agents or aggregators and other distribution partners who are usually working with agents themselves. Almost exactly 50% of our quotes come in directly through our dashboard and 50% come in through API; that is the first part of this diagram. Obviously we are getting a little bit more thorough information from our API partners; we are just taking in whatever they send. But at the point at which somebody gives us a business name and address, we kick off roughly 12 vendor calls, it will be a little more with a couple of vendors that we are adding, with some intentional redundancy, that try to answer every question that we need to answer.
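As a rough illustration of that fan-out, here is a minimal Python sketch of kicking off redundant vendor calls in parallel under a hard time budget. The vendor list, the `call_vendor` stub, and the budget are hypothetical stand-ins, not Coterie's actual integrations.

```python
# Minimal sketch: fan out redundant third-party data calls in parallel
# under a hard time budget. Vendors and the call itself are hypothetical.
import concurrent.futures

VENDORS = [f"vendor_{i}" for i in range(12)]  # intentional redundancy

def call_vendor(vendor: str, business_name: str, address: str) -> dict:
    # Stand-in for a real vendor API call; a real one would return
    # fields like class code, payroll estimate, square footage, etc.
    return {"vendor": vendor, "business": business_name, "address": address}

def gather_risk_data(business_name: str, address: str, budget_s: float = 15.0) -> list[dict]:
    responses: list[dict] = []
    with concurrent.futures.ThreadPoolExecutor(max_workers=len(VENDORS)) as pool:
        futures = [pool.submit(call_vendor, v, business_name, address) for v in VENDORS]
        try:
            for fut in concurrent.futures.as_completed(futures, timeout=budget_s):
                try:
                    responses.append(fut.result())
                except Exception:
                    pass  # one slow or failed vendor should not block the quote
        except concurrent.futures.TimeoutError:
            pass  # keep whatever arrived within the budget
    return responses

print(len(gather_risk_data("Acme Coffee", "123 Main St")))  # -> 12
```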

(16:22)

Those questions usually fall into one of three buckets. I know Martina and I were talking a little bit about this as a potential follow-up question, so I will jump into it right now: what does the business do, how big is it (which is really the exposure), and what are the risks associated with where it is. I would say that we are really good at one and three. If you know anybody that is good at number two, how big the business is when we are talking about private small companies, I would love to have a conversation about it. But we try to identify all of that, and once we have the requisite information, that potential policy can go through an evolution of classification, of evaluation of risk, and of recommendations, which is something you mentioned earlier, which I love.

(17:07)

It is important. To me it is part of underwriting to say what the coverages really should be, even with an informed buyer such as the agents in our strategy. All of that comes together; we have some discretionary pricing and ultimately we have a rating. All of that comes together in less than 20 seconds, and 65% of the policies that we bind started as a quote less than 24 hours before. It is a completely different kind of approach, but it is an approach that is entirely based on the expectation that we should be able to find more information about a risk from outside of the agent, which the agent can then review and confirm and ultimately, just to clarify, has to provide to and be responsible for with the business owner. But all of that can be done incredibly quickly by leveraging some of these intelligent decisioning tools.
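The stages he describes, classification, risk evaluation, coverage recommendations, discretionary pricing, and rating, chain naturally into a pipeline. Here is a minimal sketch of that shape under a 20-second budget; every stage body and value is a hypothetical placeholder, not Coterie's actual logic.

```python
# Minimal sketch of the quote pipeline stages described above, run as a
# chain that must finish within a time budget. Stage internals are
# hypothetical placeholders.
import time

def classify(submission: dict) -> dict:
    submission["class_code"] = "retail"      # placeholder classification
    return submission

def evaluate_risk(submission: dict) -> dict:
    submission["risk_score"] = 0.42          # placeholder risk evaluation
    return submission

def recommend_coverages(submission: dict) -> dict:
    submission["coverages"] = ["BOP"]        # placeholder recommendation
    return submission

def rate(submission: dict) -> dict:
    submission["premium"] = 1200.00          # placeholder rating
    return submission

def quote(submission: dict, budget_s: float = 20.0) -> dict:
    start = time.monotonic()
    for stage in (classify, evaluate_risk, recommend_coverages, rate):
        submission = stage(submission)
        if time.monotonic() - start > budget_s:
            raise TimeoutError("quote exceeded time budget")
    return submission

print(quote({"business": "Acme Coffee", "address": "123 Main St"}))
```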

(17:53)

Predictive modeling: we have a confidence model, we have a risk scoring model, and we have the ability to impute or predict when we do not get good data. The confidence model helps us, and this at least speaks to one of the questions from before, by trying to address which data sources are best at which things. That is a really interesting problem, because some data sources are really bad, or just junk information in some cases, but might be really good at certain things. We are using ML to figure out not just what is the best data, but also what are the best sources to use in which cases, and that is why we have so many data sources. That is my quick run-through. I know we have some other things, but, no, that was my run-through.

Martina Conlon (18:39):

You can take your time.

Paul Bessire (18:40):

For sure. No, no. But this is genuinely the pitch. Martina did a great job covering the overarching picture, where personal lines are ahead of commercial lines from what I have gathered, but from a commercial perspective, for our specific use case, we are using intelligent decisioning throughout the underwriting process. And that underwriting process can take, and is intended to take, 20 seconds or less.

Martina Conlon (19:08):

Does anyone have any questions? All right. Well, I have a question. Oh, please.

Audience Member 1 (19:17):

Kind of a general question, and I am kind of new to the industry: are insurance companies under any obligation to serve women-owned and minority-owned businesses? And does that have to be figured into your model?

Paul Bessire (19:39):

We both will probably have answers on this. Go ahead.

Martina Conlon (19:42):

So certainly the industry is heavily regulated when it comes to consumer protections and things like that, but I do not believe there is any regulation that says we have to give preferential treatment to any type of business other than by assessing the risk. In many lines of business we have to file what our underwriting rules are, specifically what we are going to charge, and what information we are going to gather, so there are certainly restrictions on the industry. However, I have never heard any of my clients talking about the form of ownership of the business. It may be an internal underwriting rule, because they may find that women-owned businesses perform better, but I do not believe there is any regulation around it. Does anyone else in the audience have any thoughts on that?

Paul Bessire (20:44):

My quick follow-up, and this is the fourth time I have said this today, which shows you how hot the topic is: there is one thing that keeps me up at night relative to my job, and it is how do I combat 400 years of systemic bias that is inherent in some piece of data, that I try to protect against, that I cannot find, that I do not know is in there. There is an altruistic piece of that, which I hope is coming through in how I am explaining this, but there is also the regulatory piece: at any point in time, any piece of information that we may be capturing and using may ultimately be shot down. We are interacting in all 50 states, and we actually are in Canada a little bit as well, so you are interacting with 51 or 52 different governing bodies at this point. Anybody can say no at any point in time to anything that we are using. So I really try to protect against the combination of those things, and the easiest way that I have found to do it is to be very discerning about what pieces of information go into the model while we are capturing it all.

Martina Conlon (21:56):

Oh, is there a question?

Audience Member 2 (22:05):

Can the data itself introduce bias?

Paul Bessire (22:09):

For sure. No, that is a tremendous point, and a fair one. Yeah.

Audience Member 2 (22:17):

I just have a question. Can you talk a little bit about how you build for that?

Paul Bessire (22:21):

Sure.

(22:25)

Yeah, so we cheat, and that is my kind of kidding way to say it. It is not just everybody internally that we are leveraging; we are leveraging our agents as well. So we have 12-ish vendors, and many of them are trying to tell us the same thing. At the very end of a quote, we will stand by binding it because we are comfortable with what those inputs look like. But once a quote is presented to an agent, the agent has the opportunity to modify the inputs, which largely means rating variables: think payroll, sales, number of employees, square footage when we do BOP (we have BOP and GL as our main focus). And the agents pretty consistently change some of those inputs. So we are currently trying to evaluate the agents as the source of truth, which everybody can laugh at if you want to, because I understand the fallacy that is baked into that.

(23:33)

But we use the agent as a source of truth to evaluate our data vendors. We give an allowance, kind of a buffer range, between the value that the agent enters and what we receive back from a vendor. And we score the vendors: how frequently we get back a piece of information from a vendor, how quickly (because that is actually really important to us), how close it was to what the agent said, and whether it was changed or not changed. Ultimately we are only serving up one number; think of it as pre-fill on steroids, because we are also baking it directly into our underwriting in real time. Even if we get a value from four different vendors and it is four different numbers, we have to pick one. And I think we have done a really good job of handling that. When I say we cheat, it is not just because we had four or five people internally working on this for literally over a year before we launched the product last July; we then immediately had 20,000 agents, four or five thousand of which have been active interacting with us on that dashboard, actually pushing buttons and helping inform our ML. That is ML in its truest form: having our agents train it.

(24:53)

That is in terms of evaluating which vendors we should trust in which situations, not in terms of the actual final premium.
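As a sketch of the idea only, vendor scoring against agent-confirmed values might look like the following. The weights, tolerance, and observation fields are hypothetical assumptions for illustration, not Coterie's actual scoring.

```python
# Minimal sketch: score vendors against agent-confirmed values with a
# tolerance buffer, then serve up a single number. Weights, tolerance,
# and field names are hypothetical.

def within_buffer(vendor_value: float, agent_value: float, tolerance: float = 0.15) -> bool:
    """A vendor value 'agrees' if it falls within a relative buffer of the agent's."""
    if agent_value == 0:
        return vendor_value == 0
    return abs(vendor_value - agent_value) / abs(agent_value) <= tolerance

def score_vendor(observations: list[dict]) -> float:
    """observations: [{"returned": bool, "latency_s": float,
    "vendor_value": float, "agent_value": float}, ...]"""
    if not observations:
        return 0.0
    returned = [o for o in observations if o["returned"]]
    if not returned:
        return 0.0
    coverage = len(returned) / len(observations)   # how often it answers at all
    speed = sum(o["latency_s"] < 2.0 for o in returned) / len(returned)
    accuracy = sum(
        within_buffer(o["vendor_value"], o["agent_value"]) for o in returned
    ) / len(returned)                              # how close to the agent's value
    # Hypothetical blend; a real system would learn weights per field and situation.
    return 0.3 * coverage + 0.2 * speed + 0.5 * accuracy

def pick_value(candidates: dict[str, float], vendor_scores: dict[str, float]) -> float:
    """Even with four vendors returning four numbers, only one is served up."""
    best = max(candidates, key=lambda v: vendor_scores.get(v, 0.0))
    return candidates[best]

scores = {"vendor_a": 0.82, "vendor_b": 0.55}
values = {"vendor_a": 120_000.0, "vendor_b": 98_000.0}
print(pick_value(values, scores))  # serve the highest-scored vendor's number
```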

Martina Conlon (25:01):

So I do have a question in terms of how you are using AI in all of this: at runtime versus in development, when determining what your models are. And are you leveraging any self-improving models?

Paul Bessire (25:18):

I will follow up on all of those questions for you as well, but thank you. The term AI is both a hot topic and has been bastardized, so I am not going to have a perfect answer for every point at which we use AI. To me, every one of those five boxes up there has a version of artificial intelligence baked into it. I have not explained all five; the only one we just went into was the confidence model, but we have translation layers and predictive layers, all of that just to come up with one final answer. And all of it ultimately is trying to learn from itself. But that is not quite the question you are asking. In terms of a model that learns from itself, the best example is what I just alluded to: this confidence model that was initially trained by agents, where now we have vendors coming in that are compared to each other.

(26:16)

So there may be a vendor that we generally trust, call it vendor A. Well, vendor B is not always identical to A, which is good; if it were exactly the same, that is not the kind of intentional redundancy we need, that is literal redundancy, and a waste. But if B is always close to A, now we can start to bump up B without even having to have the agent's input. Now we have a true AI model where the vendor responses are reacting together within something that no one is ever touching and no one ever has to validate, other than comparing them to each other. We are working toward the evolution of this, which is that all of our underwriting decisions will be automated and based off of what the model itself has learned from previous iterations. That is not where we stand at this point, just because we have such a low volume of loss history; there is interaction needed from both our underwriters and our data scientists. But over time, we will work toward a point where our risk scoring, modeling what we think that risk actually is, will set the price without even a heuristic written by an underwriter.
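To illustrate the shape of that self-adjusting trust, here is a minimal sketch in which an untrusted vendor B earns credibility by repeatedly agreeing with a trusted vendor A. The update rule, tolerance, and learning rate are hypothetical, chosen only to show the mechanism.

```python
# Minimal sketch: bump trust in vendor B when it consistently lands close
# to an already-trusted vendor A, with no human in the loop. The update
# rule and constants are hypothetical.

def update_trust(trust: dict[str, float], a: str, b: str,
                 value_a: float, value_b: float,
                 tolerance: float = 0.15, lr: float = 0.05) -> None:
    """Nudge trust[b] toward trust[a] when B agrees with a trusted A."""
    if value_a == 0:
        agree = value_b == 0
    else:
        agree = abs(value_a - value_b) / abs(value_a) <= tolerance
    if agree:
        # Agreement with a trusted source earns credit...
        trust[b] += lr * (trust[a] - trust[b])
    else:
        # ...and disagreement slowly erodes it.
        trust[b] = max(0.0, trust[b] - lr * trust[a])

trust = {"vendor_a": 0.9, "vendor_b": 0.5}
for value_a, value_b in [(100.0, 104.0), (250.0, 240.0), (80.0, 79.0)]:
    update_trust(trust, "vendor_a", "vendor_b", value_a, value_b)
print(trust)  # vendor_b's trust drifts upward as it keeps agreeing with A
```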

Martina Conlon (27:40):

So if the model is improving itself while it is operationally in use, then you have to be somewhat limited in what kinds of things you can model. Confidence in a data source would be one thing; however, the level of risk is not necessarily something that you can do with admitted lines in commercial markets.

Paul Bessire (28:06):

That is fair. Well, to some degree, but

Martina Conlon (28:08):

In many states and things like that. So,

Paul Bessire (28:10):

Well, I would,

Martina Conlon (28:11):

Or have you determined otherwise?

Paul Bessire (28:12):

I will back a step off of that, because I do think so. The way you articulate it is perfect, because with self-training models, or self-improving models, excuse me, a single flag or a single output is much easier. So for instance, accurate, yes or no: that is an easy one, and we have several variables that are variations of accurate working up the chain with the confidence model. The way I look at it, predicted frequency of a loss or a claim is singular; I know you can have multiple claims within a year, but claim or no claim is a singular binary. Theoretically, you could even consider it a classification model, for the data science nerds in the house. That is a binary outcome that we can start to predict now. Severity and loss control and mitigation and multiple claims, those types of things become much more difficult to train and run into far greater risks of some of the regulatory concerns that we talked about before.

(29:16)

But a model like this we are doing today. We do have models in place right now that are rank ordering. One of the outputs that we have is likelihood to have a claim, and we are driving a lot of decision making off of that prediction; sorry, it is an input into some of our models, and it is also an output of a model that happens within that 20 seconds. Okay, question for you. I have got a couple, but, I know we are wrapping up. No, we are at time. All right, never mind, I will ask my questions afterward.
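As a minimal illustration of that kind of claim/no-claim model, here is a sketch that trains a binary classifier on synthetic, hypothetical features and uses the predicted claim likelihood to rank order risks. The features, data, and choice of logistic regression are assumptions for illustration, not Coterie's actual model.

```python
# Minimal sketch: likelihood-to-claim as a binary classification model
# used to rank order risks. Features and data are synthetic/hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features: payroll (log dollars), employees, building age (years).
X = rng.normal(loc=[12.0, 8.0, 30.0], scale=[1.0, 4.0, 15.0], size=(500, 3))
# Hypothetical labels: 1 = had a claim in the policy year, 0 = no claim.
y = rng.binomial(1, p=0.1, size=500)

model = LogisticRegression().fit(X, y)

# At quote time, the score is both an output of this model and an input
# into downstream pricing and underwriting decisions.
new_risks = rng.normal(loc=[12.0, 8.0, 30.0], scale=[1.0, 4.0, 15.0], size=(5, 3))
claim_likelihood = model.predict_proba(new_risks)[:, 1]
ranked = np.argsort(-claim_likelihood)  # rank order: riskiest first
print(claim_likelihood, ranked)
```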

Martina Conlon (29:50):

Does anyone have any other questions?

Audience Member 3 (29:53):

Yeah, I know we are out of time.

Paul Bessire (29:55):

Sure.

Audience Member 3 (29:56):

Just one more on bias. California recently put forward legislation saying that anytime a decision is made in regards to underwriting or a claim, a customer could decide if they would rather have a human review it or appeal it. I am thinking about legislation like that, given that models may never be a hundred percent unbiased. What are your thoughts on having that preference in the hands of the customer, understanding that the models might not ever be a hundred percent unbiased?

Paul Bessire (30:38):

Yeah, my quick take, and I would love your take as well: we built our entire approach with the ability to toggle on and toggle off. Now, we currently only have one underwriter, so there is a capacity constraint, and I would be concerned about addressing this long term if the insured's propensity to have a human look at it increases, meaning a lot of our strategy might have to change. To your point about whether our models are always going to have some level of inherent bias, I think the answer is objectively yes. But what is funny is that they are literally an objective representation of what exists, with the data source representing truth. So, if this is part of the question, I would have a hard time with any argument, and we will comply with anything we need to comply with, just to clarify, but I would have a hard time with any argument that states that a model is more biased than the underwriter or the individual who would then consume and have to look at that. I am much more comfortable in all respects with a modeling approach versus an individual. But we have built our tool in a way in which we can toggle the model completely off and have an individual either review or set rules around how that underwriting works.
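A toggle like the one described could sit in front of the decision engine as a simple routing rule. Here is a minimal, hypothetical sketch; the flag names and the set of jurisdictions requiring human review are invented for illustration.

```python
# Minimal sketch: a per-jurisdiction (or per-customer) toggle that routes
# a decision either through the model or to human review. Config values
# are hypothetical.
from enum import Enum

class Route(Enum):
    MODEL = "model"
    HUMAN_REVIEW = "human_review"

# Hypothetical regulatory/config flags, e.g. a state mandating the option.
HUMAN_REVIEW_REQUIRED = {"CA"}

def route_decision(state: str, customer_requests_human: bool) -> Route:
    if customer_requests_human or state in HUMAN_REVIEW_REQUIRED:
        return Route.HUMAN_REVIEW
    return Route.MODEL

assert route_decision("OH", customer_requests_human=False) is Route.MODEL
assert route_decision("CA", customer_requests_human=False) is Route.HUMAN_REVIEW
```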

Martina Conlon (32:10):

I think there is going to be the introduction of a lot of eligibility rules that say if they are requesting manual underwriting, they are not eligible for our products. But all right. With that, thank you everyone. Have a great afternoon and we appreciate your time.