Transcription:
Patricia L. Harman (00:14):
Welcome back to the Digital Insurance Virtual Summit, Agentic AI and Insurance: The Next Wave of Enterprise Transformation. Now we're going to have a great conversation with David T. Vanalek, Senior Vice President and Chief Legal and Compliance Officer at Richmond National Insurance, and we're going to be discussing the use of agentic AI in claims and underwriting. Thank you so much for joining our conversation, David.
David T. Vanalek (00:40):
Hello, happy to be here. And thank you for inviting me. Definitely appreciate it.
Patricia L. Harman (00:44):
So agentic AI holds a lot of promise and offers multiple benefits, such as lower costs, faster service, greater operational efficiency, and even more adaptive and customized insurance products. And personalization is a really important aspect when we talk about its possibilities. AI agents are continuously interacting with customers and delivering personalized engagement, and this often includes tailoring their offers or policies based on evolving customer needs and life events. How optimistic are you about the potential of agentic AI, and how quickly do you see some of these capabilities unfolding?
David T. Vanalek (01:32):
Yeah, it's a great question, Patty, and perhaps I can share with the audience how I'm viewing this particular topic. I serve as the Chief Legal and Compliance Officer, so I oversee all regulatory compliance and legal matters for a carrier. And as far as the carrier, Richmond National, we are a specialty commercial lines carrier in the surplus lines industry. Just so everyone's aware, the largest portion of the carrier market is admitted carriers, where carriers are licensed in a particular jurisdiction and their form sets and their rates are approved by the regulator in that particular jurisdiction. The surplus lines space is viewed as the safety valve of the overall insurance industry. And the reason why I mention that, Patty, is we have what's called freedom of form and rate in our part of the insurance industry, which means that by the time a particular submission comes to us through our distribution channel partners, there isn't a whole lot of time left before policies need to be bound, and there may need to be some significant customization or personalization.
(02:59):
"Personalization" was, I think, the term you used, to help really craft that policy to meet the needs of the policyholder at a price that makes sense for them. And so when you shared with me this particular question on personalization, my immediate thought was that for the surplus lines space in the industry, I can definitely see some cautious optimism for various agentic AI tools to be utilized in such a fashion, to really be much more efficient and creative in helping policyholders, at the end of the day, get the type of coverage that they need for the price that they need in the surplus lines space.
Patricia L. Harman (03:41):
And that's really an important aspect of that for them, too. So claims is one of the key areas where we've seen the adoption of AI, and it's an area that I've been following for a number of years. As agentic AI evolves, do you see where AI agents could orchestrate the entire claims workflow, from first notice of loss to identifying whether or not there's fraud involved with the claim, to damage assessment, adjudication and payment?
David T. Vanalek (04:14):
Yeah, there's a lot of pieces in that particular question.
Patricia L. Harman (04:17):
Yes.
David T. Vanalek (04:19):
And again, the lens through which I approach this particular issue: there are carriers who basically write personal lines policies, auto, homeowners, renters insurance, what have you. I'm not going to speak to that part of the industry, where perhaps some of these tools may provide differing levels of opportunity. From my perspective in commercial lines, with commercial lines policies sold to small to mid-sized businesses, those tend to be more often than not third-party claims, where a third party is bringing some type of claim, or perhaps it's a lawsuit, against one of our policyholders. And then the question becomes, in that particular setting, in that particular environment, are there opportunities for the use of agentic AI to help the adjudication process? I certainly do not see where such tools could be used end to end, from first notice of loss through adjudication and resolution. And maybe it's just my bias from being a former practicing defense attorney and litigator in San Francisco and Chicago many years ago, but I certainly see areas along the claim life cycle where tools could be used to help the adjudication process move more efficiently, particularly, as you mentioned, on the front end, during the claims intake and first notice of loss process: helping potentially verify that there's a policy, et cetera, and providing some solutions in that particular space so that the policyholder gets met where they need to be met and communicated with, with that proper level of empathy, by a human adjuster in that particular space.
(06:16):
I think of the situation where most folks are not typically sued every day, right? And if they have a process server showing up at their front door, the fear starts to elevate a little bit, and they want to talk to somebody who knows what they're doing to help them navigate the situation. So unfortunately, until there's an agentic AI attorney out there, and who knows, maybe that's around the corner, they're going to want to at least have some level of empathy from an examiner or adjuster who can navigate that situation the best for them and work with outside counsel in steering their way through the adjudication process. And yes, excuse me, there might be some opportunities on maybe some general correspondence, utilizing agentic AI to facilitate more consistent communication, not just on that particular claim, but across claims as well. And then lastly, on the very back end, after there's been an adjudication or resolution, finalizing some of those loose ends, whether it's making some final payments, some invoicing, that sort of thing, and really providing some expedited clarity there, so that the overall process is really more focused on that middle area. That's where, if you have a strong claim operating model and you have folks with the proper level of expertise at meeting the claim and the claimant and the insured where they need to be met, I would envision tools like this being most useful.
Patricia L. Harman (08:04):
I agree, and I'm thinking, especially with so many folks in the insurance industry retiring over the next couple of years, that we will see a larger use case for agentic AI and other types of AI to handle some of these responsibilities. And that'll be an interesting transformation to watch, I think. So we've been talking about using agentic AI in claims, and I'm wondering if there are any legal issues that could arise with its use as part of the claims process for carriers, because there are some states, like Texas, where the regulations state that anyone who investigates or adjusts a loss on behalf of an insurer has to have a license. And I think that introduces a very interesting question: does that mean that we'd have to figure out how to give an AI program an adjuster's license so that it could be in compliance? Because we know, and you'll know from being in the legal industry and also from government regulation, everything tends to lag just a little bit. So,
David T. Vanalek (09:20):
Absolutely. Yeah, sometimes it continues to lag even decades later. I think what's important about that particular scenario that you proposed is recognizing, and again, I'm speaking on behalf of a carrier that operates solely within the US, there are going to be other potential laws and international frameworks that come into play for those who operate on a more global scale. But I certainly would say that there's a strong recognition within the US that a lot of these frameworks and regulatory compliance issues are state-based, and that's been that way for a long time now. And I think you are seeing, to some degree, I'd say a little bit of a tension now when it comes to implementing AI tools. For example, in the US, you mentioned Texas has that particular statute, and there are a couple of states that have implemented more broad, comprehensive AI-type frameworks. Colorado comes to mind; I think their effective date for implementation is February of next year. California has come out with some things, and Texas actually just came out, not more than a few weeks ago, with, what was it called?
(10:57):
It was the Texas Responsible Artificial Intelligence Governance Act. And so there are these frameworks that have come out just a little bit more broadly. But in the insurance space, the framework that's been top of mind for many in the industry is this model bulletin that the NAIC, the National Association of Insurance Commissioners, promulgated back in December of 2023. And it really laid out the legal framework that sat behind it and the expectations of each state as to what the corporate governance framework would look like: how to fold an AI framework, or AI systems framework, into your enterprise risk management framework, maybe also including some internal audit functions, that sort of thing, inventorying the tools that you have, auditing against those particular tools. And so again, very, very state-based. And then I mentioned this tension at the federal level. It was about two weeks ago, maybe three weeks ago now, it actually was July 4th, that the One Big Beautiful Bill Act was signed into law. But there was a brief moment there, in the two weeks preceding that signing, where there was a proposed 10-year moratorium, I don't know if you've heard about this, where in that bill they were proposing to limit the ability of the states to enforce AI regulation.
Patricia L. Harman (12:36):
Oh, yes, I do remember reading about that.
David T. Vanalek (12:39):
And that caught a lot of attention. It came out of the House, and when the Senate looked at it, they eventually pulled that out of the bill. But it caught a lot of people's attention, because that would've not necessarily created a federal uniform standard when it comes to an AI regulatory framework; it would've just prohibited each individual state from moving forward with their own. And I'm sure there would've been constitutional legal challenges all over the place associated with that. But it is very interesting, as all of us in the industry try to develop a proper framework for the respective companies within which we work, in order to provide that safe, reliable, ethical, transparent environment, to recognize that there's this inherent tension that's taking place right now, at least at the federal and state level.
Patricia L. Harman (13:37):
Yeah, very true. I had forgotten about that one decision or policy that they were going to put in there. So, in terms of risk assessment and underwriting: agentic AI can collect and analyze external and internal data streams like IoT, social media, and satellite imagery. That's definitely one of its strengths, to help refine risk models in real time. How substantially would this affect the business if agents could dynamically adjust their underwriting guidelines and pricing as needed? And I'm thinking about this in terms of, like you're talking about, being in the commercial space. If you're working with a fleet operator, or you're working with someone that has large commercial exposures, that's an interesting opportunity there for them to use agentic AI.
David T. Vanalek (14:37):
Right. And again, being in the surplus lines space, we have the benefit of being able to adjust the pricing, adjust the coverage terms and any limitations or what have you, to really customize that solution. Whereas in the admitted carrier space, they may know what the solution is, they may be able to identify that, but they may not be able to act upon it as quickly, unfortunately. So I think when it comes to underwriting in particular, there are some opportunities for that dynamic pricing, as you're mentioning. And I think we still have, I'd say, some ways to go with this too. I don't want to be sitting here opining that we're already there. No, but I think there are opportunities to provide better information. And again, it's all based upon the quality of the data that's coming in to feed that analysis as well.
(15:53):
And at the end of the day, the frameworks set forth in the NAIC AI model bulletin do talk about a human in the loop ultimately making the decision. So in other words, it's a decision support tool, but it's not making the decision to underwrite. It's providing another data point, another element, another analysis, and, again, in the commercial lines space, additional information that helps facilitate the analysis. Yes, there's going to be close, and there should be close, scrutiny by the regulator to ensure that there isn't any, and we'll talk about this later, I'm sure, any kind of implicit or inherent bias, that sort of thing, because at the end of the day, you're trying to find that best solution for the policyholder, but at the same time balancing that with the need to not over-rely upon the information or outcome provided by that particular tool.
Patricia L. Harman (17:05):
I like the differentiation that you made, that it is a tool, and I think that sometimes people forget that. It's just like when the internet came out, and Google and all of these, and everyone was talking about how it would change life. Well, yes, it did, but they were tools. And I think about me as a journalist: the ability to Google information when I'm working on a story is so much easier than having to go someplace and do some of the research. So we keep seeing this evolution, and it's improving some things, and maybe making us take pause and check some of the information that comes out at the same time, too.
David T. Vanalek (17:49):
Oh, yeah. Yeah. The example I always raise, again, I work with various law firms and work in the legal industry, and the one example I always bring up, especially in that particular setting, and it's probably not applicable to the audience here today: there was a time when practicing law when, in order to verify the case law that you were reading in support of a particular argument, in support of your position in some type of dispositive motion, you needed to do what was called Shepardizing the case. So you'd pull this little volume off the shelf and you'd read to see, okay, what cases address that case, and verify it there. And then they had little supplements, these little blue sheets and pink sheets, where you were actually, by paper, verifying and reading those cases, seeing how they treated that case, so that you knew that you had the best case law supporting your position.
(18:48):
And when Lexis and Westlaw came out with Shepardizing, and you literally clicked a button and it just showed you immediately, the legal industry was losing its mind at the time. They were very much thinking, oh gosh, how could this possibly be correct? And it was a matter of, well, okay, yes, maybe it was incorrect in some instances at the beginning, but over time and over the years, it got better and better, to where now folks rely upon that as the authority. And so I do see, as with any tool, you need to verify and validate, especially in its infancy. And agentic AI, I think in some circles, is still definitely in its infancy, but there may come a time where we do become more reliant on it, and we do have more trust behind some of these tools.
Patricia L. Harman (19:42):
I agree. As it learns more, and I almost want to say just wait a week or two because it's been evolving so quickly, it's moving almost at the speed of light. So we talked some about regulation, and regulation is definitely a headache for financial institutions and insurance companies. It's possible that autonomous agents, I guess, could continuously track regulatory updates, check for compliance with policies, and flag discrepancies. What do you think the net impact of this kind of potential could be on insurance companies then? Well, I'm anxious to hear what you say; I've had conversations with carriers about this. So
David T. Vanalek (20:24):
Yeah, no, ideally I would say that if some of these potential tools at some point are able to truly assess changing regulations, the adoption of new statutes, that sort of thing, if there's a framework within which it's viewing that particular body of law, recognizing, for example, that if you're a carrier who's in the admitted market, these statutes per state apply to admitted carriers and therefore would be applicable to you, and then doing the next level of analysis of, okay, these are the lines of business and the products that we sell, and therefore filtering it down even further. I do see that there are opportunities to at least, I'd say, flag the issues a little bit more and filter those issues that are more pertinent and applicable to carriers, based upon the business that they're writing and the regions or jurisdictions in which they practice. And I do see that there are some potential opportunities there.
(21:42):
On the flip side, because with a lot of bulletins or regulations, or even statutes, sometimes it's in what's called the gray area as to whether it applies or not, I do have concerns. A trained person in that particular industry might look at it and go, yeah, maybe they didn't come out and quite say it, but they certainly meant for this to apply to this part of the industry. I can see agentic AI potentially missing some of that, because it doesn't recognize that nuance and it's being a little bit more linear in its approach. But who knows, it's possible that it could eventually maybe even snare additional things that internal teams are not necessarily recognizing as something that might be applicable and that they should be considering operationalizing into their internal policies and procedures. And I know that was one part of your question too: are there ways to utilize agentic AI to improve internal compliance? I do see that as a potential opportunity. Again, if you've got a strong set of internal processes and controls that are well documented, and you've got a good tool set that you're utilizing, I could see some benefits there. If you do not have good internal controls or processes and workflow, agentic AI is not going to improve that situation. You have to start out with having a good operating model with your people, with your teams, good processes, good controls, good documentation, and then provide that overlay to see if there are any benefits.
Patricia L. Harman (23:36):
Yeah, I agree. When I was asking you this question, I was thinking, because I've had conversations with a number of carriers, in terms of tracking deadlines from a regulatory perspective and making sure that they hit those, whether it's for health insurance or workers' comp or something like that, where you have so many days to do this. I think in that respect, agentic AI could be very helpful, and even better than just the reminder that pops up on a calendar type of thing. So I can see that developing going forward at some point.
David T. Vanalek (24:16):
And certainly, going back to the different parts of the industry, the admitted carriers, they have a lot more deadlines. And it's not that we're not regulated in the surplus lines space, we certainly are. It's just that we're less regulated with respect to some of those filings in each jurisdiction. So,
Patricia L. Harman (24:41):
So, agentic AI decisions, such as a denial of claims or pricing recommendations: if we use agentic AI and those pop up, I guess they could trigger maybe some transparency or explainability issues for insurers. Can you think of ways, or how could carriers mitigate these risks while they move forward with their adoption of AI?
David T. Vanalek (25:06):
Yeah, with those particular examples, again, I always go back to: at the end of the day, it needs to be the person at the desk, on the line, that's making the decision or the call, especially on a claim denial like you were mentioning, because that's a very strong position that's being taken, and it has to be the right one, bottom line, because when the claim presents itself, that person is potentially vulnerable. It's the first time they've been involved in that type of a situation, and if they're going to get that kind of response, it better be the right one, kind of thing. And so with respect to these types of tools, again, I view it as a recommendation, a suggestion, as a, hey, this appears to be comparable to other claims that have been handled or other pricing decisions that have been made. But at the end of the day, is that the right answer in this particular situation? And if the answer is no, then the answer is no.
(26:27):
I know earlier you were kind of mentioning, or asking about, the licensing question, and I'm certainly noticing various stories, and not just in the insurance industry, of where it's really the entity or the individual with the license, at the end of the day, that is responsible. Again, hearkening back to the law firm examples, I'm sure maybe you've seen, Patty, and maybe some of your audience members here have seen, countless examples of attorneys putting their signature on legal briefs that have cited fake cases in support of their position. When it's brought to the judge's attention, the judges are understandably not pleased at all, and they sanction the attorneys. A lot of the sanctions are coming in between $1,000 and $5,000 at a time. And it's really because, under the rules of professional conduct, when you sign your name to that brief, you are verifying that you have read it, that it is accurate, that you as an officer of the court are verifying that this is the right answer, basically. And so going back to your claim example: claim examiners have their individual adjuster licenses in each of the states in which they adjudicate claims, and it is their license that's on the line. So that's why I do view it, at least currently, that responsibility really does lie with those who hold the license, and understandably so, because the regulatory body is putting its faith in that particular licensed professional to do what's right to protect the citizens of that state. Does that make sense?
Patricia L. Harman (28:15):
Oh, it makes total sense, because I'm thinking there have been stories in the media about different news organizations using AI to write reviews on something or write articles about something, and nobody bothered to fact-check it and make sure that those places really existed or that the information was correct. So as a journalist, I'm thinking, how could you ever publish something that you didn't bother to check and read through first? And it's the same as what you were saying: if you hold the license, it's your responsibility to check over that claim, or whatever aspect of that is involved. So
(28:56):
I can totally see that. So let's shift gears a little bit and talk about data privacy and consent. Autonomous agents can definitely ingest and process large volumes of personal and sensitive data, and if companies have not gotten the proper consent, or maybe they're improperly combining data sets, sometimes that creates legal exposure. As insurers integrate agentic AI into their organizations, where on the list of priorities should this consideration fall? Because it really falls into areas of being part ethical, part legal and part technical, because of the technology aspect of it and then trying to protect this information. And do you see this as maybe one of the larger challenges that carriers will have to address?
David T. Vanalek (29:54):
Yeah, I certainly do, because it's one of those areas that falls in between other different pockets. And when it comes to privacy and data security as well, you're seeing that the regulators in various states are certainly keenly interested in setting up particular frameworks to protect the privacy of their citizens. And from a company perspective, it's trying to identify what those various frameworks look like and then how to create an approach that addresses it, and then recognizing that the data set is being properly utilized, that what you've got is clear consent processes associated with that. I think there's a lot of work to be done in that particular space from a prioritization perspective. Hearkening back to that bulletin I was sharing with you, and just as an FYI, when the NAIC promulgated that model bulletin back in December of '23, it wasn't self-executing; each individual state basically examined it and identified whether they were going to adopt it, or maybe have a slight variation to it, what have you.
(31:24):
And I think currently, as of maybe earlier this month, there were 24 states and Washington, DC, that have adopted this particular framework. And it really talks about building out an artificial intelligence systems program, part of which creates this corporate governance structure. And I think this particular topic is very much top of mind within that particular framework: identifying, okay, how do you safely and securely recognize what the different requirements are and minimize any challenges associated with that? Especially since, as you mentioned at the outset of the session, Patty, agentic AI acts somewhat unilaterally; it can kind of go into different areas, gather the inputs, reason through an assessment, and then provide some type of actionable output based on that. And we really want to be very, very careful associated with that, because it's not just states like California, with the California Consumer Privacy Act, and others. I mean, there have been several states that have promulgated variations of various privacy laws, and I know that the insurance industry itself continues to assess that particular framework and how it moves hand in hand with some of these other principles when it comes to AI tools and usage.
Patricia L. Harman (32:59):
Well, it's a little bit scary when you think about it: once that information is out, there is no way to pull it back. And it reminds me of things that people post on social media, and they never think, well, this could go viral, and even though I've just posted it to this small group of people, all of a sudden the whole world knows about it. And because of the information that carriers collect and have access to, protecting that data just becomes that much more important.
David T. Vanalek (33:29):
And I think you saw that when ChatGPT really exploded on the scene back in November of '22, and there was maybe a six- to nine-month period there where, at various companies, I mean, it was just a hard stop. Remember, no one could use it. And it was, well, why? Well, because some employees were just putting customer information into an open system, which was then training its model based on that particular confidential data. So yeah, to your point, privacy is paramount. And a lot of companies said, okay, all right, let's figure out ways to utilize such tools in a safer environment, a more containerized environment, one that we can control, making sure we've got a proper framework around that, and then utilizing those tools in such a way that we are being cognizant of those privacy laws.
Patricia L. Harman (34:27):
Yeah, very true. And that's where AI could be helpful in terms of tracking all of those different privacy laws; it kind of creates this vicious cycle. So, there are also concerns that extend to the possibility that fairness could be compromised because bias is introduced by AI agents. Agentic AI can unintentionally encode or maybe amplify discriminatory biases in underwriting or claims decisions. Since regulations increasingly require bias audits and compliance with fair lending and insurance practices, can the potential benefits be realized while also mitigating some of these heightened risks?
David T. Vanalek (35:16):
Yeah, this is a particularly hot topic, and at the end of the day, it was, I think, one of the key drivers behind that model bulletin we've been talking about, because that bulletin really was trying to minimize what it called adverse consumer outcomes, where there might be some bias in the underlying algorithm or training model. And it may be because the underlying dataset was biased itself, or maybe it was not a broad, representative sample; it was trained on a much more limited dataset and therefore had some inherent biases, because it just didn't have the full spectrum of information to assess against. And so from that perspective, you're seeing where companies are inventorying the tools, tracking what the outcomes are that are associated with the recommendations or suggestions from these particular tools, and then conducting periodic bias audits against that. One of the things that we saw back at the end of April, and I think there were maybe just a few articles about this, was this really interesting executive order that had come out on April 23rd, 2025, and it was called something like Restoring Equality of Opportunity and Meritocracy, or something along those lines.
(36:49):
But long story short, it focused on disparate impact analysis, and it was basically directing the attorney general that if there was utilization of disparate impact analysis in any federal programs, that sort of thing, that needed to cease. But then there was also a second piece to that, directing the attorney general to identify state-level frameworks, regulations, laws that utilized a disparate impact analysis and identify if there were any, what are called, constitutional infirmities in those state laws. And so in looking at that, I don't know if there's been much discussion out there in the industry, but what immediately came to mind was everything we've been talking about, and whether there may be some focus, or, I'd say, change in how different states are approaching that particular analysis. Again, the model bulletin is based on various laws that are set forth in each state, like the Unfair Trade Practices Act, the Unfair Claims Settlement Practices Act, things like that. It didn't promulgate a new statutory scheme, but I think it was very good, principled guidance for companies to identify and see what was available. But again, going back to that tension between the federal and state level, I do feel that with this particular topic and how companies conduct themselves when it comes to periodic bias audits, that sort of thing, it's going to be interesting to see what happens in the next 6, 9, 12 months with some of these things sitting out there. Yeah,
Patricia L. Harman (38:45):
I agree. That's one of the things that our editorial team has been watching and trying to cover, to make sure that companies are aware: as we're using all of these new technologies, these are some of the things that you need to watch as you're implementing them, and these are just really important factors to take into consideration.
David T. Vanalek (39:03):
Right, right. Yeah. One question I typically hear is, since there isn't what I call settled law on the topic, how do companies engage? It's not like you can wait until things become rather refined and clarified, because to your point, Patty, it could literally be years before the law or the statutes or the regulations catch up to where we are right now. So it's a matter of trying to identify, maybe on a more principles-based approach, where various regulators or legal authorities may view things and how they're approaching it, and building your operating model around that particular framework. Yeah, I think that's how folks are looking at it.
Patricia L. Harman (39:56):
That makes sense. Definitely. So all of these risks present accountability and liability issues. And at this point, I don't know that we have a clear path on how to resolve them. So what happens if an AI agent makes an erroneous decision or takes an unauthorized action? Who's liable for that? And I'm thinking because it does have the ability to be autonomous, how do you even track back, well, this is who made that decision, it was a machine or a program as opposed to an individual? Where does the liability fall then?
David T. Vanalek (40:40):
Yeah, yeah, that's a great question. And again, it goes back to my earlier comments, at least through the lens of the lines of business that I oversee, that sort of thing: it's ultimately the person with the license, the licensed claims adjuster or the attorney, or the licensed company, who is responsible at the end of the day. And again, that's because the tool is not necessarily making the decision; it's not the ultimate decision. It's providing a suggestion or recommendation, which is another data point for that individual, or a group of individuals, to collectively assess and make a decision on. But for other areas, that's a fascinating question: is it the developer or is it the deployer of the technology? And you're certainly seeing a distinction between developers and deployers in some of the more recent statutory schemes on the issue. In Colorado, for example, their AI act distinctly talks about high-risk systems and has certain requirements of deployers and certain requirements of developers, and you're seeing, I'd say, that general model being followed in other areas or other states as well. So I'm curious whether that body of the law will develop over time and you'll see a shift in liability. But currently, when I say status quo, I think it's the person whose name is signed at the bottom of whatever it is who is ultimately making the call. So,
Patricia L. Harman (42:28):
Well, it's like the questions that arise where, if you have an autonomous car and it's driving, you're in the autonomous mode, when it's in an accident, again, it's that same idea. It becomes man versus the machine, and where does the liability fall? And I agree, I think it's going to be a really interesting aspect to watch going forward.
David T. Vanalek (42:48):
And even in that particular space, it's still developing, but a lot of those initial lawsuits, it was, well, it's the license of the driver. It goes back to who is ultimately the licensed individual who's responsible. So.
Patricia L. Harman (43:01):
Very true. Yeah. Thank you very much, David. We've reached the end of our session, so we're going to take a quick break for our audience, and we will be back at 1:50 with our conversation with Jamie Warner. I hope you all have enjoyed the conversation so far. David, thank you so much for sharing your insights and giving us a lot to think about in multiple different areas, and for giving a legal perspective, too, in terms of some of the things to keep in mind with agentic AI.
David T. Vanalek (43:34):
Oh, absolutely. Thank you, Patty. Thank you for having me. And thanks to your team at Digital Insurance here. This has been wonderful. Thanks.