Opening Remarks & Prioritizing Agentic AI Use Cases and Implementation in Insurance

Transcription:

Patricia L. Harman (00:14):

Welcome to the Digital Insurance Virtual Summit, Agentic AI in Insurance: The Next Wave of Enterprise Transformation. I'm Patty Harman, Editor-in-Chief of Digital Insurance, and you're going to find our sessions informative and helpful as your companies navigate the emerging world of agentic AI. Artificial intelligence has been around since the 1930s, when British mathematician Alan Turing introduced many of the concepts upon which AI is based. But it was the advent of ChatGPT in 2022 that brought AI into the mainstream and made it an essential part of our daily lives. Now, the insurance industry isn't necessarily known for being on the cutting edge of technology adoption; however, that reputation is definitely changing as carriers partner with insurtechs and adopt a host of new technologies to digitize their processes, and insurance technology companies themselves are changing how businesses operate today. Over the past few years, we've seen the implementation of artificial intelligence into various aspects of the insurance ecosystem, from underwriting to claims to risk management and more.

(01:27):

And its impact is being seen across every line of insurance, from P&C to life to health and into cybersecurity and specialty lines. We started with AI, we moved into generative AI, and now we're focusing on the next iteration, agentic AI, the most autonomous, adaptable and potentially transformative version of this technology. While this technology holds great promise, there are definitely challenges in its adoption and implementation. Carriers and brokers have long operated with the aid of multiple legacy systems, handwritten notes, Excel spreadsheets, and a plethora of data in multiple formats. Synthesizing this information into a format that can be used by agentic AI is a complex process that will not be easily accomplished. Ensuring that the data is accurate and unbiased is another factor to consider, and protecting the personal information of policyholders and proprietary company data is a primary concern for any company adopting AI into its practices.

(02:38):

However, choosing to take a wait-and-see approach is not an option. Companies that delay their digital transformation and the implementation of AI into their practices will fall far behind their competitors. So with all of this in mind, let's start our discussion. Our panel is going to look at prioritizing agentic AI use cases and implementation in insurance. And joining us today are Casey Kempton, President of P&C Personal Lines at Nationwide; Evan Groot, Global Go-To-Market Director for Insurance at Salesforce; and Nikhil Kansal, Co-founder and CTO at Cara. Thank you to each of you for joining us today, and we're going to be focusing on prioritizing agentic AI use cases and implementation in insurance. So let me get started by asking: the insurance industry is still very early in its adoption of agentic AI. Where does the industry stand in prioritizing its use cases for the implementation of AI agents? And do you think it's fair to say that the industry is moving cautiously, but with great intent to leverage this technology? Casey, I'm going to ask you that first.

Casey Kempton (03:59):

Sure. And thanks for having us. I'm really glad to be here. I think caution and insurance very naturally go together. We're in the risk transfer business, and accuracy of the information, efficacy of the data, making sure we don't bloat our expenses for a period before we understand the full long-term benefits of any new technology: that's just part of how we operate. What's so interesting about AI is that we have an opportunity upfront to be really prepared for what's coming and move with the appropriate amount of caution. So at Nationwide, we're prepared to leverage agentic AI. We're on that path today. We're prioritizing where agents and agentic AI can best serve our associates and our customers, our members. But we are exercising caution. So here we take a red team/blue team approach. Our blue team prioritizes how we can best use AI for our business, and our red team is responsible for thinking about all the risks and exposures that it can bring, things like cybercriminals or malicious use, and we'll continue down that pathway as agentic AI continues to mature. I think this is true for a lot across the industry. Our AI experience from its beginnings is over a decade long, so as we move into agentic, we can build on how we established AI and machine learning models into our capabilities and then kind of build off that foundation as we factor in agentic AI. So I mean, we're feeling good about it from where we sit positionally, and I think that's probably indicative of where some others in the industry may be seeing it as well.

Patricia L. Harman (05:50):

I love hearing how companies are incorporating it into their systems and how they're testing it and everything. Evan, anything that you want to add to that?

Evan Groot (05:59):

Yeah, so Patty, I'll echo the thank you for the opportunity to be here. It's a pleasure to speak, and I think Casey hit it right on the head. What we've really seen is experimentation across a portfolio, looking at the use cases, where value can be, while monitoring the testing: can we measure the results and see the impact? And I do see, in just these early days, there have probably been hundreds of proofs of concept and proofs of technology over a variety of use cases. And we're sort of maturing into the state now of how do we start to get the enterprise value, the promise at the end of the rainbow, of what we can pull together from those technologies and those use cases that are going to drive results beginning this year and really starting, I think, heavily next year.

Patricia L. Harman (06:42):

Okay, great. Nikhil, I don't know, is there anything you want to add at this point?

Nikhil Kansal (06:46):

Of course. Patty, thank you again for having us all here. When it comes to AI, I think the opportunity that's presented to us at this moment in time is of course tremendous, and proceeding with caution is important, but the amount of risk that you want to take is ultimately going to determine the kind of outcomes you're going to drive. And having a calculated and cautious approach is important in that you're not taking too much risk. But as with every technological transformation, we see our customers start cautiously, start slow, and then, once they see the ROI, once they see the results, start to go all in on it.

Patricia L. Harman (07:22):

That's a pretty exciting way to approach the process. Now, it's important for insurers to clarify their business objectives for agentic AI use cases, and this includes tying them to strategic goals such as revenue growth, which could be tied to new customer acquisition; cost optimization, which could include automating repetitive processes; risk mitigation; and innovation. From where you sit, Casey, is the industry able to tie every use case, or almost every one, to a defined, quantifiable business outcome?

Casey Kempton (08:02):

Ideally, that's exactly what we would do. I think to start with, we look at the challenges or opportunities that we're trying to solve, the strategy they're tied to, and the potential outcome. It's really important with an emerging technology or enablement such as agentic AI that we start by defining the problem to be solved, or the underlying process that we're trying to improve, or the experience that we're trying to enhance. We've got to develop a clear, testable hypothesis and then utilize data and user feedback to highlight the need. Long gone are the days of a solution in search of a problem, with AI even more so. You've got to know what kind of problem you're trying to solve for, and then align the business case or the use case to the strategic goal that we're trying to advance, wherever that might be in our business.

(08:59):

And it takes all the flavors that you just described, and then identifying the benefit relative to the outcome of the work. Sometimes that's going to be a clear ROI, a business case, but also: how do we think this is going to scale? What is the time to value? What is our ability to pilot and develop proofs of concept that can inform us, not just as a go or no-go, but that can really give us the insights we need experientially, with MVPs and other pilots that allow us to further elaborate on or substantiate those business cases?

Patricia L. Harman (09:37):

One of the things that I've noticed as we've been covering all forms of AI is that across the insurance ecosystem, all of the different silos within a company are working much more collaboratively now, because it's impossible to add AI in any form to just one particular area of insurance. So that's interesting to see. Evan, anything you want to add in terms of clarifying business objectives?

Evan Groot (10:06):

I think that's well said. When we talk, in your CEO-speak, we talk about humans and agents working together, and the value is driving both, right? It's driving productivity for the humans; we call that augmented, right? Assisted is really the generative and predictive use cases, which is tell me what work to do or help me do my work. Agentic is do some of that work for me. So there's one element of the business case that's about just helping the humans do the things they're instinctively good at, which is build relationships and set strategy, and letting AI drive the value of help me look around corners, help me find unmet needs, automate the routine and the mundane. And so when we look at the business case, it's about looking over those two factors: both the agentic, what it can do, but also what do you do with that human ability, and how do you make sure it's deployed at something that's going to drive value for the organization. That's how we think about it.

Patricia L. Harman (10:59):

Okay. Let's talk about AI adoption for a minute. How far along is the industry in terms of identifying and mapping potential use cases in these different areas? Where do you see them in terms of their internal operations, whether it's AI agents for IT support or HR processes and procurement? Casey, Evan, Nikhil, are you seeing it being used in those areas at this point?

Evan Groot (11:34):

So broadly, we're seeing it across all of the above, right? Most of the use cases start internal, especially for agentic. So we see, early on, that IT organizations are quick adopters: helping them write code or helping them do some of the shortcuts, or even ticketing systems. We see a lot of Agentforce there. That's something we use internally; we're sort of customer one here. But I see carriers kind of adopting that and then looking at the big operational buckets first. So going back a year ago, it was really about how do I take some costs out of claims and out of servicing, because there's just such a big rock there. And more recently, this year, it's about how do I drive revenue? So some of those same internal processes of how do I prepare for a meeting, orchestrate a meeting, complete the follow-up after a meeting, and just beginning the experimentation externally.

(12:25):

So we have a couple of customers that are working on external use cases: help me understand my policy, help me do something like begin the process of a beneficiary change. And it'll probably be human in the loop, but even those agentic use cases are generally human in the loop somewhere on the back end of the process. And then there's just so much drive for innovation. There are so many market forces across the different segments of how do I understand where there's an element in the marketplace where I could drive differentiation and bring value. And so we see actuaries and some of the back office folks starting to try to leverage AI for some of the models and where they might be able to go. And probably the last is just around decision support, and that is underwriting. The industry, for P&C, is returning to profitability. Life and annuity has been growing pretty steadily. But how do I profitably price risk and do that in an efficient way? I see a lot of promise there.

Patricia L. Harman (13:24):

Casey or Nikhil?

Casey Kempton (13:26):

I have a few thoughts, and then, Nikhil, go ahead and jump in. I think you really covered it well, Evan; that is kind of the full spectrum of where we're seeing some of these tools starting to apply, in particular right around the internal use cases. We're prioritizing generative AI there, mostly so that we can really learn, in the brass tacks, how that agentic support can enable us. We can do direct training, we can provide support for the capabilities, gather feedback and reactions, and that's going to help us kind of lean in and test into how we can take some of this externally. So I mean, you mentioned it, Evan: when you think of claims processing, risk assessment, underwriting, fraud detection and prevention, and even reporting, right? When we can ask our data questions and avoid the report creation as a task, there's a lot of power in that, and in how we engage with customers.

(14:22):

There's so much opportunity here to bring agentic AI into that transformation to deliver proactive, personalized experiences. We think about how we use AI to maintain the correct voice and brand delivery, as well as when to escalate to a human or ask a human teammate a question, such that we're delivering as enriched an experience as we want to. The first pilot we did on this was around personal lines claims log notes, and it's really super simple, right? We're summarizing log notes across an entire claim file, every interaction (there might be 30 log notes per claim), such that when we are engaging with a customer, that summary is right there at your fingertips. So it's not agentic yet in that way, but the process of building that, engaging with that, and then seeing the impact it has on our service delivery starts to build our confidence in broader use cases. But of course, with any of this: are customers comfortable engaging with AI agents? Do they trust it? Are there concerns about data privacy and transparency in terms of how we're letting the data drive some of the decisions? So those are just a couple of the things that we think about. I think this is a huge space when we get to autonomy and how we're going to manage that. So I'll stop there. Nikhil, if you wanted to add anything.
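The log-note pilot Casey describes reduces to a simple pipeline: gather every note on a claim file, then summarize them in one pass. A minimal sketch, with the actual LLM call left as a pluggable function; the claim IDs, note texts, and the counting stub below are all illustrative placeholders, not Nationwide's implementation:

```python
from collections import defaultdict

def build_claim_prompts(log_notes):
    """Group log notes (claim_id, timestamp, text) by claim and build one
    summarization prompt per claim file."""
    by_claim = defaultdict(list)
    for claim_id, ts, text in sorted(log_notes):
        by_claim[claim_id].append(f"[{ts}] {text}")
    return {
        cid: "Summarize these claim log notes for an adjuster:\n" + "\n".join(notes)
        for cid, notes in by_claim.items()
    }

def summarize_claims(log_notes, summarize):
    """`summarize` is whatever LLM client the carrier uses (hypothetical)."""
    return {cid: summarize(p) for cid, p in build_claim_prompts(log_notes).items()}

notes = [
    ("CLM-1", "2024-05-01", "FNOL received; water damage to kitchen."),
    ("CLM-1", "2024-05-03", "Adjuster assigned; inspection scheduled."),
    ("CLM-2", "2024-05-02", "Glass claim; repair shop confirmed."),
]

def stub(prompt):
    # Placeholder standing in for a real summarization model
    return f"{prompt.count(chr(10))} notes summarized"

print(summarize_claims(notes, stub))
# {'CLM-1': '2 notes summarized', 'CLM-2': '1 notes summarized'}
```

The point of the shape is that the summarizer is swappable: the grouping and prompting logic stays the same whether the model behind it is generative today or agentic later.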

Nikhil Kansal (15:55):

Yeah, thank you, Casey. And I think you hit a couple of things on the head pretty well, at least in terms of what we are seeing: a lot of the adoption is operational for agentic AI, and the agentic part of that is still very early. So some of the really clear use cases internally have been identified: assisting people with getting some work done, summarizing, generating analysis. All techniques that have been applied in the past by technical teams are now available to operational teams. So it's really early. And I think where we're going from here is coordinating multiple systems to act as one system. You've got your legacy systems, you've got customer-facing systems, you've got analytics systems. Agentic AI is really at the beginning of being able to make these systems talk to each other, work together and derive insights that would normally take someone a long time to piece together from all that data. So that's really what we're seeing our customers start to use agentic AI for: to find the efficiencies that would otherwise take longer to solve on a case-by-case basis.

Patricia L. Harman (17:06):

I'm glad that you all discussed the fact that this is a process, that it doesn't happen overnight. It does take some time. So for my next question, I'm going to ask you to kind of rate, on a scale of one to five, how far along the industry is at documenting the effects of agentic AI in different areas. So I'll kind of just go down the line: I'll go to Evan, then Nikhil and then Casey. But for the processes that it's going to impact, do you feel like the industry is fairly far along, and how would you rate that on a scale of one to five, Evan? One being not at all, and five being, yeah, they're really close to where they need to be.

Evan Groot (17:51):

Yeah, Patty, it'll be interesting to see how my fellow panelists answer. We are presenting live, but we didn't compare notes. But when I look across it, I see this is probably the most mature of all of them. There's been so much ideation around use cases, I've seen as many as 800, so much so that we're beginning to rethink it, from the idea of a use case to jobs to be done. And so we're taking a more persona-driven approach and then clumping together use cases as the jobs to be done, almost using a persona-driven model. But there's a lot of documentation around what the processes are, how those processes exist today, where predictive, generative and agentic AI might fit into those processes in a future state, and what the maturity model is. So when I think about it, of the elements, this one's probably a four. But I'm really interested to hear what the others have to say.

Patricia L. Harman (18:43):

Nikhil, where do you think insurers are in terms of their processes and how they'll be impacted?

Nikhil Kansal (18:48):

Yeah, I guess I'll preface this by saying I'm bringing a little bit of an outsider's perspective. I don't work at an insurance company, I work at an insurtech. I would probably say around a two. And the reason why is I think there's a lot of scope for full value chain incorporation when it comes to agentic AI. So I think on an individual basis, a lot of agencies, brokerages and insurers are doing a pretty good job of evaluating agentic AI and incorporating it into their processes. But insurance is really relationship driven. We're all talking to each other on a daily basis. We're coordinating multiple businesses, multiple entities. Insurance doesn't work on an individual basis. So I would say a five probably looks like agencies, wholesalers, carriers and reinsurers all adopting agentic AI to work together and across boundaries.

Patricia L. Harman (19:42):

That's a great way to describe it. Casey, how do you rate insurers in terms of the processes that agentic AI will impact?

Casey Kempton (19:52):

Yeah, I appreciate, Nikhil, how you rated it relative to the entirety of the industry, because it naturally starts in some places, particularly those areas and lines of business that are very rule driven, where third-party data is a key part of the underwriting. And so there's a lot of low-hanging fruit in how you capture that data, model that data, and then have that data learn from itself relative to how you adjust a claim or what have you. And you see that more in the flow lines. So my answer is more kind of targeted. I would agree with your two on the whole industry, but I'm a little more where Evan is relative to our flow businesses and where we see our readiness and application to processes. We thought about it on a 1 to 10 scale, so, sorry, I guess it roughly translates to maybe a 4.2 to a 4.7, depending on who I poll in the organization to just try to get some inputs around that.

(20:56):

I mean, I think obviously, how do we connect internally? And you talked about the silos, right? The key to that is the executive sponsorship: it's got to go all the way to the top of the organization that says this is important. Number one, it is part of our future. It's where it's going for all of us, and we've got to solve it and leverage every opportunity we can, from internal stakeholders and associates to outside partners, all the way to customers, and then our broader interaction model with all constituents in insurance. And so I think we're seeing that among different companies, and certainly at Nationwide, the alignment across stakeholders. You've got a technology component, a data component, an innovation component, the business units; everyone's going to come at this in their own way. So it's how companies kind of galvanize around the frameworks and their decision criteria, how they're going to approach and evaluate different types of solutions, and the sequencing of how we do this when everyone's kind of chomping at the bit for the next big project.

(22:01):

I think maturing in that space has got to happen as well. And probably we have really good practices around these already; it's just how do we apply that as an industry to that which is emerging. So we're already evaluating one solution, and then another one pops up that's solving a different, bigger problem that adds more value. And now you're having to make a trade-off: do we stick with this one and stop here, or do we leap into the next big thing to try to gain an advantage, if you see that there is one there? And for me, I think for all of us, it makes it such an incredibly exciting time, because we know insurance. I remember in my very first days in insurance over 20 years ago, someone saying, insurance moves at a glacial pace. I thought, what does that mean? Oh, glaciers are really big and they move really slowly.

(22:53):

So that, I imagine, has been a lot of our experience over time. But with this, sitting still and waiting for everyone else to figure it out so we can fast-follow? From what I am seeing, what I read about the industry, and the conversations that are happening, we're not sitting on our hands on this one. But again, we're in the risk business, so we're not just going to put anything at risk in terms of the capital that we need to pay out on claims en masse in the industry. So we'll always be a little conservative about it.

Evan Groot (23:25):

Casey, I loved how you mentioned all the different stakeholders there, so let me pull that thread, because one of the truisms I've heard in insurance is: we measure progress in generations. And for most, we have three generations. We have truly COBOL systems that still roam the halls. We have sort of the middle generation, the enterprise data lake platforms of the last couple of years that people are doing, and now the agentic and the generative AI. And so there are seriously three generations of innovation happening within an organization, but there's the need to align around stakeholders, and the companies that are executing, and executing well, are able to drive that company alignment. And if you think about the IT conversations, they're much broader. We're now talking to the chief data officer and the chief transformation officer and the chief customer officer and so on. So it's this broader organizational drive, and then it's the stakeholders on the ground who can get value. How do we prioritize our portfolio of investments across sales, across different lines of business, across operations? And then how do we make sure that that strategy is comprehensive and the right thing for the whole? It's not just solving one piece; it's the companies that are able to get those stakeholders aligned. And one of the pieces of advice is: you need to learn what this means for your organization. If you're out there listening to this call and you're not very far along on it, it starts with getting that organizational alignment.

Patricia L. Harman (24:51):

That's a great lead into my next question, and Nikhil, this is for you. How can companies scale their internal knowledge and performance with agentic AI now?

Nikhil Kansal (25:01):

Yeah, I think that's a great question, and I'll first start by noting what is different about agentic AI versus other AI, or just generative AI. I think number one is how well it performs with unstructured data. A lot of traditional machine learning models required very technical expertise to clean the data and put it in a format that machines could learn from. That's not necessarily true with agentic AI. It does really, really well with unstructured, human-like data, like standard operating procedures and carrier risk appetite guidelines. And the second thing that's different about agentic AI is how easily accessible it is to everybody in an organization, which speaks to Evan's point about how you have to collaborate across multiple leaders and departments and get multiple stakeholders aligned. So I think with agentic AI, scaling internal knowledge and performance is really natural.

(25:53):

If you can have agentic AI ingest your internal knowledge and really distill it down to the key points, or the decision tree that you take for any particular operation, then you can scale that knowledge and the performance of your staff by deploying AI holistically across the organization. We've got customers that use agentic AI to ingest carrier risk appetites and suggest where they might place a particular applicant or risk, based on the guidelines, based on how that agency likes to do business, based on the relationships that they have with wholesalers. And really the key to that is its understanding of what is between the lines, what is present but not explicit. So agentic AI has a huge potential impact to scale that performance and knowledge, based on its inherent ability to understand unstructured data.
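The placement-suggestion workflow described above can be sketched as a simple matching step. Everything here (the carrier names, the structured appetite fields, and the headroom-based ranking rule) is a hypothetical illustration; in practice an agentic system would interpret unstructured appetite guidelines rather than hand-coded fields like these:

```python
from dataclasses import dataclass

@dataclass
class Appetite:
    # Hypothetical structured view of one carrier's risk appetite guidelines
    carrier: str
    classes: set      # business classes the carrier will write
    max_tiv: int      # maximum total insured value
    states: set

def suggest_placements(applicant: dict, appetites: list) -> list:
    """Return carriers whose appetite matches the applicant, best fit first."""
    matches = []
    for a in appetites:
        if (applicant["class"] in a.classes
                and applicant["tiv"] <= a.max_tiv
                and applicant["state"] in a.states):
            # Rank by headroom under the TIV cap (an arbitrary illustrative rule)
            headroom = a.max_tiv - applicant["tiv"]
            matches.append((headroom, a.carrier))
    return [carrier for _, carrier in sorted(matches)]

appetites = [
    Appetite("Carrier A", {"restaurant", "retail"}, 5_000_000, {"OH", "PA"}),
    Appetite("Carrier B", {"restaurant"}, 2_000_000, {"OH"}),
]
applicant = {"class": "restaurant", "tiv": 1_500_000, "state": "OH"}
print(suggest_placements(applicant, appetites))
# ['Carrier B', 'Carrier A']  (Carrier B is the tighter fit)
```

The "between the lines" part Nikhil mentions is exactly what this sketch cannot do: hard-coded fields only capture explicit rules, whereas an LLM-backed agent can weigh soft signals like relationships and preferred business mix.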

Patricia L. Harman (26:48):

That was a great overview. So some industry players contend that there's not a full understanding of the differences between agentic AI and traditional AI, which, Nikhil, you kind of gave us a great overview of, but the problem is that this could lead to improper prioritization and implementation. Do you all agree with that? Maybe, Evan, I'll start with you on that and kind of go down the line here.

Evan Groot (27:14):

So I think, again, there's a continuum, but agentic AI is the newest, and I very much appreciate Casey's comment: when these new technologies are introduced, there are things that are already in flight. So predictive has been around for the better part of a decade; there are trillions of predictions every year in the industry, and people are using that across the value chain. And now they're looking at what the generative use cases are, and there have been a lot of projects over the last 12 to 18 months. I loved your opening, Patty: since ChatGPT hit the street, we've been out there looking for ways to drive value using it. And then there's this idea of agentic, and the fact that it can act. To Nikhil's comment, one of those actions might be: go find the data that I need to execute, go find the knowledge that I need. That is an actual action the AI is taking, rather than just responding with the data set that it has.

(28:06):

So it's constantly evolving. And if I think about the frontier of that, the conversations we're having today, and where a lot of the questions are coming from, are about two technologies. Is it one agent to orchestrate, or multi-agent, right? And what does that look like, and how does that work with things like MCP servers and orchestrating backend systems? And so where does the agentic framework start to layer on, even once you've understood basic agentic AI in a single application? How do you think about it more broadly across the organization? And so it's good, better, never best, in my opinion.

Casey Kempton (28:44):

Yeah. I'll just add a thought here on kind of governance as we think about this. So you're right, we've been doing AI in different forms for a long time now, and our traditional governance models might focus on performance or ethics, but are they really going to address the needs of autonomous decision making and that agent-to-agent conversation? We've built out these decision engines already with our teams, with the expertise that we have, with the validation criteria that we have. But when we've got agents talking to agents, how tangled does it become, such that we can't tease out whether this is accurate anymore with the existing methods that we have? When we have robots checking robots who are talking to robots, we're so many degrees removed from it, and we're just going to have to have the confidence that it's all aligned the way that it needs to be. So I believe, conservatively, our governance will include checkers checking the checkers for some time, until we understand how it's learning and what the long-term ramifications are, et cetera. And it's that governance piece that I think we're kind of a long way from totally having nailed. We've got to learn our way through that.

Nikhil Kansal (30:01):

And just to add on to that, I think it's really important to not anthropomorphize AI and to remember that it is ultimately a mathematical approximation of intelligence. And so it needs a human in the loop. It doesn't have emotional empathy. It doesn't really understand people and their needs. So there's always a risk in letting AI agents do whatever they want without oversight. Having a governance model, like you mentioned, Casey, is important. Having a human in the loop is important. Making sure that the objectives of the AI are aligned with our values and culture and the goals that we want to drive in an organization is really important.

Patricia L. Harman (30:40):

All great points, everybody. Thank you. Nikhil, how will the role of insurance agents shift with the adoption of agentic AI now?

Nikhil Kansal (30:50):

Yeah, that's a great question. And I think if you look historically, the role of insurance agents has been shifting gradually, but it has always kind of been focused on providing value and support to the client, whether that's advocating for them, finding the right coverage for them, explaining different trade-offs, or helping them make those risk decisions. And I think what's really gotten in the way of that historically is how much work is required to achieve that outcome, whether that's going out to shop, or reading and comparing and writing documents and writing emails. A lot of this work gets in between the agent and the ultimate goal of what the agent is supposed to do, which is be that trusted advisor for their client. And I think agentic AI is what's going to bring that back into focus for insurance agencies and agents. Being able to delegate certain manual and repetitive work (of course, you're still going to check it and make sure it's correct) allows the agent to then focus on building those relationships, being that trusted advisor, and scaling their knowledge and operations, their empathy and their values, the care that they bring for their clients, to more people. And that's really how I see the role of an insurance agent shifting: less focused on the mundane day-to-day, comparing documents, reading and writing, and more on the critical thinking, the analysis, the emotional connection, and the trust that's built between a client and their agent.

Patricia L. Harman (32:18):

Casey and Evan, do either of you have anything you want to add to that? No. Okay. So to position insurers for agentic readiness, do your agentic AI considerations include discussions around things like autonomy and control, to safely delegate decisions? Are you determining where human oversight would be required? And then with the results that you get, are those going to be understandable to whoever's using the AI? And then I'll also ask about regulations that might limit the use of some of these autonomous agents, and how the industry is responding to that. And I'm sorry, Casey, that's a lot to put into a question. I'm going to field that to you first. Sure.

Casey Kempton (33:11):

Well, as we've been talking about, agentic AI is really centered around the ability to act independently, make decisions, take proactive actions and achieve a specific goal with minimal human intervention. But as we've discussed, there has to be a careful balance that emphasizes human oversight, especially where it matters most. So thinking about safe delegation: setting clear boundaries for the AI's decision-making abilities, implementing things like confidence thresholds to route to a human for validation when confidence falls below a certain level, and adopting safeguards for all agents. The human in the loop, or human oversight, is incredibly important. It'll be required for any complex or high-value decisions where accuracy is absolutely paramount. And as you talked about, Nikhil, ensuring empathy and compliance and making sure that humans are final in the approval process is going to be critical. I think with user experience, AI agents will always be able to bring a human into the conversation when needed, or allow the user to opt out of the AI experience.
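The confidence-threshold routing Casey describes can be sketched in a few lines. The threshold value, the "high-value" flag, and the route labels below are illustrative assumptions for the pattern, not any carrier's actual implementation:

```python
CONFIDENCE_THRESHOLD = 0.85  # hypothetical cutoff; tuned per use case in practice

def route_decision(confidence: float, high_value: bool) -> str:
    """Route an agent's proposed decision: auto-apply only when confidence is
    high AND the decision is low-stakes; otherwise escalate to a human."""
    if high_value:
        return "human_review"        # high-value decisions always get oversight
    if confidence < CONFIDENCE_THRESHOLD:
        return "human_review"        # low confidence falls back to a person
    return "auto_apply"

# Examples: a claim payment is always reviewed; a routine address update
# is auto-applied only when the model is confident.
print(route_decision(0.95, high_value=True))    # human_review
print(route_decision(0.97, high_value=False))   # auto_apply
print(route_decision(0.60, high_value=False))   # human_review
```

The design choice worth noting is that the stakes check comes before the confidence check: no confidence score, however high, lets a high-value decision bypass the human.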

(34:21):

We've been living with online chatbots for a really long time, and you can always eventually get to someone to answer your question, and we've all sort of figured out when the human is in the loop and when they are not, but that's always got to be there. Confidence scores can again determine when a human is automatically brought in. So now we're going to build models to regulate the AI based on experience and observations to help us with that governance. When it comes to regulations, this is a space that is also constantly evolving, and with each state having its own insurance regulator, and then other regulations around data and privacy and how companies compete in a market and on websites, and where we get explicit permission or implied permission, there is no single standard around all of that now. And so we're going to see regulation evolve relative to AI, and not just on data access: if it was an AI decision, do I have to communicate that it was an AI decision? Does there need to be some kind of an appeal process for that? How much explanation will we have to do on the underlying models that informed the decision? I think that is just a real gray area right now for us to truly evaluate while the regulation sort of catches up, particularly with regard to insurance.
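The confidence-threshold routing described above can be sketched in a few lines. This is a minimal illustration, not any carrier's actual implementation; the threshold value, the `Decision` fields, and the routing labels are all assumptions made for the example.

```python
# Sketch of confidence-threshold routing: the agent acts on its own only
# when its confidence clears an agreed bar; otherwise the decision is
# escalated to a human reviewer for validation.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.85  # below this, a human must validate (illustrative)

@dataclass
class Decision:
    action: str
    confidence: float

def route(decision: Decision) -> str:
    """Return who handles the decision: the agent or a human reviewer."""
    if decision.confidence >= CONFIDENCE_THRESHOLD:
        return "auto_approved"
    return "human_review"

print(route(Decision("approve_endorsement", 0.92)))  # auto_approved
print(route(Decision("deny_claim", 0.61)))           # human_review
```

In practice the threshold would differ by decision type — a high-value or complex decision, as Casey notes, might route to a human regardless of the model's confidence.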

Evan Groot (35:52):

Yeah, I think that's well said, and a big question, so thank you for taking the lead on that one. If I had to boil it all down to one word with regulation, it comes down to trust. Trust needs to be the number one value, and it's never been more important than when you talk about AI. So how do you build that trust? It's masking data so that sensitive data isn't getting into model training. It's being able to test responses for toxicity and bias and take corrective action. It's about having an audit trail that's visible both for regulators to look at and internally, to make sure it's consistent with the brand message and values and what the organization wants to have. And so a good AI strategy needs to have all of those components, both to manage the reputational risk and the internal challenge of making sure it's doing the right thing for the customer, but also ultimately to demonstrate that to regulators.

(36:43):

And insurance is interesting, right? Regulation comes in so many different forms. There's data regulation in the community, and now there's some federal conversation about maybe not having more state-based regulation. I don't know where all that will ultimately land, but what's important for an organization to consider is: what are the building blocks of trust in your organization? Are you masking? Are you protecting the information and the data? How are you handling the audit trail and toxicity detection, and how do you manage that? And then how can you demonstrate those controls, if you need to, to a regulator? Ultimately, that builds the trust, and as trust goes, so does adoption, right? Both internal adoption and external adoption. So all stakeholders involved, whether it be regulators, the industry carriers, or the consumers themselves, benefit from a transparent arrangement.

Patricia L. Harman (37:38):

What efforts are insurers taking now to ensure the successful scaling of agentic AI across their enterprises? And I'm kind of wondering where the industry or your companies are in terms of data foundations and creating high-quality, accessible data pipelines; or agent governance and creating policies for oversight, identifying and escalating problems, and auditing the outcomes; or even change management, where you're creating opportunities for training and adoption within the company to help upskill and reskill employees. So I'll start with you on that one, Casey, if you don't mind.

Casey Kempton (38:25):

Yeah. When we think about data foundations, creating these high-quality, accessible data pipelines for both Nationwide and the industry, AI-ready data is and will continue to be the key to unlocking value here. Our industry data is really rich, but we have to move quickly to be able to ingest and leverage it in a way that lets us build highly effective AI agents. When we think about integration and architecture, we all now have a lot of experience with APIs and building API platforms. There's a lot of connectivity that goes on behind the covers, all through APIs and headless tools with orchestration technology, and that is really critical. We feel like we're in pretty good shape around that; it's to be determined what complexity this will introduce into that world. We talked about it a little bit with regard to governance. That human in the loop is just going to continue to be really, really important.

(39:27):

And then leveraging what we already have in terms of the strength of our change management techniques, bringing in our associates early, creating that familiarity and driving through that change, I think, is really essential. We think AI can help everybody in how they do their work in some way, big and small, and leaning into that to start helps us even as we think about externalizing where some of these agentic AIs will be applied. So scale first inside, and then figure out how you do that en masse with some of what you deliver externally.

Patricia L. Harman (40:09):

Evan, anything you want to add on that?

Evan Groot (40:11):

I think that's great advice. I think your point about data is incredibly important, right? As you shift from predictive to generative to agentic, the data only becomes more important. There are rich API infrastructures behind it, but the AI conversation inevitably drives the data conversation. We're seeing some really interesting innovation in the space. So virtualized data: how to use data for AI that isn't necessarily moved into the AI tool. A really interesting concept that didn't exist and wasn't really talked about before. Things like retrieval-augmented generation, so pointing a prompt at a specific segment, maybe an underwriting or compliance or risk appetite document, and then grounding the prompt in that piece of information. And so that's where you see those two worlds of data and AI come together, because the industry is going to move faster. The other thing, I think, is the change management and cultural side.
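The retrieval-augmented generation pattern Evan describes — retrieve the relevant passage, then ground the prompt in it — can be sketched very simply. This toy uses naive keyword overlap as the retriever; a real system would use embeddings and a vector store, and the corpus snippets here are invented for illustration.

```python
# Toy RAG sketch: find the most relevant passage for a query, then build
# a prompt that grounds the model in that passage rather than letting it
# answer from unconstrained knowledge.
def score(query: str, passage: str) -> int:
    """Count query words that appear in the passage (naive relevance)."""
    return sum(1 for w in query.lower().split() if w in passage.lower())

def retrieve(query: str, corpus: list[str]) -> str:
    """Return the highest-scoring passage for the query."""
    return max(corpus, key=lambda p: score(query, p))

corpus = [
    "Risk appetite: we decline properties within 1 mile of the coast.",
    "Claims process: report losses within 30 days of occurrence.",
]

query = "What is the risk appetite for coastal properties?"
context = retrieve(query, corpus)

# The prompt is grounded in the retrieved passage only.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(context)
```

The value is exactly what Evan points to: the model's answer is tied to a specific, auditable piece of the carrier's own data rather than to whatever the model happens to recall.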

(41:04):

So what is the cultural change that starts internally, that you can control, making AI part of the entire day? One of the interesting things we did is just our annual training. This year we have 70,000 people that need to do a corporate certification that used to be train-the-trainer and cascade, and take weeks. We did it in three days, where we each presented to the AI; it evaluated, gave real-time feedback, and allowed you to complete it. And so you just think about driving an AI culture internally and then cascading that out to the external stakeholders, which I think is really well said, Casey.

Patricia L. Harman (41:39):

So Nikhil, I have a question for you. How can insurers mitigate their E&O exposures by automating documentation and maintaining data hygiene as they incorporate agentic AI into their processes?

Nikhil Kansal (41:56):

I think that's a really great question, and when it comes to agentic AI, E&O is really a double-edged sword. On one hand, you can imagine that agentic AI could actually help you mitigate E&O exposures by doing some of the double-checking; even if it's 90% accurate and you catch one error, that's one E&O exposure that you've avoided. On the other hand, if you use it incorrectly or without the correct guardrails, it could introduce new E&O exposures. Hallucinations are a common issue with this technology: if the prompt or the AI isn't sufficiently grounded, it could introduce errors into the documentation and the data that it tracks. So it's really important to evaluate the AI that you're using and make sure that as your data changes, your AI also updates and changes; make sure that you have a continuous pipeline where you're measuring the inputs and the outputs, and that you're confident the AI is performing as it should.
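The continuous measurement pipeline Nikhil describes can be sketched as a small regression-evaluation loop: rerun a labeled test set against the AI step whenever the data or model changes, and fail if accuracy drops below an agreed floor. The `fake_model`, the cases, and the 0.95 floor are all placeholders for a real extraction or crosscheck agent.

```python
# Sketch of a lightweight evaluation loop for an AI step: run labeled
# cases through the model and check accuracy against a floor before the
# updated model or data goes into production.
def fake_model(text: str) -> str:
    """Stand-in for the AI step; flags whether a document matches its policy."""
    return "match" if "policy number 123" in text else "mismatch"

cases = [
    ("COI referencing policy number 123", "match"),
    ("COI referencing policy number 999", "mismatch"),
]

def evaluate(model, cases, floor: float = 0.95) -> bool:
    """Return True only if accuracy on the labeled cases meets the floor."""
    correct = sum(1 for text, expected in cases if model(text) == expected)
    return correct / len(cases) >= floor

print(evaluate(fake_model, cases))  # True
```

Wiring a check like this into the deployment pipeline is one concrete way to stay "confident that the AI is performing as it should" as inputs drift over time.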

(42:57):

But ultimately, that risk of introducing E&O exposure from AI has existed for quite a while. What is new is its ability to mitigate E&O exposure. A lot of our agency customers are using agentic AI to crosscheck certificates of insurance they've generated against the policies on which they're based, to see if there are any errors. And a very paranoid AI will sometimes say, hey, this is an E&O exposure, when it's not; it's up to the agent to double-check that. But that is the trade-off you make when you want it to catch more errors rather than let some slip through.

Patricia L. Harman (43:37):

Okay. Thank you. We are right up against our time here. We had a question from the audience; I'm going to throw it out, and maybe one of the three of you would be willing to answer it. Could you speak to the difference between AI copilots and AI agents in terms of operating autonomously and the need for a human in the loop? Anybody willing to take that? Maybe Nikhil, since you have a lot of background in some of this?

Nikhil Kansal (44:03):

Yeah, I can take that question. And it's a really great question, because there's a lot of nuance and a lot of overlap between agentic AI and a copilot. But at a high level, you can think of a copilot as an AI that you are going back and forth with, working with in tandem to accomplish a goal, and agentic AI as a system that operates independently and autonomously with oversight. So, for example, with agentic AI you may give it a task and say, hey, research this, put this report together for me, and then make sure that it's available in X, Y, Z system. Whereas with a copilot, you might take that one step at a time and say, okay, search the web for this; look at the results; then ask a follow-up question and say, okay, summarize it in this format. That's really how the workflows differ between a copilot and agentic AI. Ultimately, they're based on the same intelligence; it's just a matter of how many systems they integrate and how autonomous they are.
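The workflow contrast Nikhil draws can be sketched in code. Here `call_llm`, the toy plan, and the step names are all invented stand-ins; the point is only the shape of the interaction: human-driven step-by-step for a copilot versus one delegated goal for an agent.

```python
# Contrast sketch: a copilot executes one step per user request, while an
# agent decomposes a goal into steps and executes them itself.
def call_llm(instruction: str) -> str:
    """Stand-in for a model call; echoes the instruction it was given."""
    return f"result({instruction})"

# Copilot style: the human drives each step and inspects the result.
step1 = call_llm("search the web for cyber coverage trends")
step2 = call_llm(f"summarize as a bullet list: {step1}")

# Agentic style: one goal in, and the system plans and runs the steps.
def run_agent(goal: str) -> list[str]:
    plan = ["research", "draft report", "publish to system"]  # fixed toy plan
    return [call_llm(f"{step}: {goal}") for step in plan]

outputs = run_agent("cyber coverage trends report")
print(len(outputs))  # 3
```

In a real agentic system the plan itself would come from the model (with the oversight and confidence gates discussed earlier), which is exactly where the autonomy, and the governance burden, comes from.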

Evan Groot (45:01):

Alright, I'll just give one example. It comes up all the time in sales, right? So you have a wholesaler or a sales manager, a territory manager, calling on an agency in a B2B scenario, and they largely are going to record that meeting. With generative, they can record the meeting, do a summary, and put that in an unstructured field, perhaps. An agentic use case tied to that is: you're going to do the same thing, you're going to summarize it, but it's going to automatically go and create follow-up tasks and opportunities and leads, send distributed marketing, and actually take action on your behalf. So it's taking that extra step to reason, to decide to do something, and ultimately invoke and take an action. In both cases, a human is sort of entwined with it, but it is that next step of the evolution.

Patricia L. Harman (45:52):

Alright, great. Well, thank you so much, everybody, for joining us. We really appreciate Casey, Nikhil and Evan giving us a very practical look at how carriers can implement agentic AI. Now we're going to take a quick break, and we'll be back at 12:55 for our conversation with David Vanalek to discuss agentic AI in claims, underwriting and other areas of insurance. Thanks. We'll see you all back here in just a few minutes.