AI for life insurance customer experience: Part two


Takeaways

  • The AI conversation isn't about adoption anymore but acceleration.
  • AI can enable insurance companies to help customers in new ways.
  • Insurers with legacy systems designed for humans to use, not for AI, could have challenges.

Mindy Chen, Mutual of Omaha's vice president of segment analytics, spoke with Digital Insurance about agentic AI and what strategic capabilities insurance companies should be investing in. Chen is working to build out Mutual of Omaha's generative AI capabilities. She discusses opportunities for the industry to apply technology to improve the customer experience. Part one of this conversation can be found here.

Responses have been lightly edited for clarity.

Mindy Chen

How does knowledge transfer work with tech?

There are AI-based tools now that can crawl the code base and the data of legacy systems to pull out information about what functions are doing and which functions depend on others, produce documentation for teams to reference, and potentially extract business logic: here is the business logic that I see being implemented in this module. That helps with modernization, because being able to pull out that business logic makes it possible to retire a legacy system.

It also helps developers who are new to the team learn about what's happening in the systems they have to maintain. AI isn't just good for writing code. Part of being good at coding means it's also very good at understanding code, documenting it, helping us understand it, and writing automated tests against it.
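The dependency-mapping idea described above can be sketched in miniature with Python's standard `ast` module: parse a module, then record which functions call which others. The sample source and function names are illustrative; real tools layer LLM-generated summaries and documentation on top of this kind of static analysis.

```python
import ast

# Toy legacy module to analyze (illustrative, not a real system).
source = """
def premium(base, risk):
    return base * risk

def quote(base, risk):
    return premium(base, risk) + 10
"""

tree = ast.parse(source)
deps: dict[str, set[str]] = {}
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef):
        # Collect every plain-name function call inside this function.
        calls = {
            n.func.id
            for n in ast.walk(node)
            if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)
        }
        deps[node.name] = calls
# deps now maps each function to the functions it depends on,
# e.g. quote depends on premium.
```

From a map like `deps`, tooling can generate reference documentation and flag which modules are safe to retire once their business logic is extracted.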

What kind of skills are necessary for people coming into the industry?

For people coming into my world, my field, building enterprise AI applications, I think we really benefit when we have someone who is a combination of a data scientist and a full stack developer. Data scientists have a lot of rigor and intuition around how to work with data, and large language models work on data. How do you detect fairness and bias in AI? How do you test? How do you run experiments? Because large language models, and generative AI as a whole, are nondeterministic, we have to be able to test at scale to compare: did this prompt change make everything better, or did it just make a few things better and everything else worse?
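The experiment Chen describes can be sketched as scoring each prompt variant over a whole test suite, repeating each case to average out nondeterministic outputs, then comparing suite-level scores rather than a few anecdotes. Here `call_llm` is a hypothetical stand-in for any model-as-a-service call (made deterministic for illustration), and the prompts and cases are placeholders.

```python
def call_llm(prompt: str, case: str) -> str:
    # Placeholder for a hosted model API call; deterministic here
    # so the sketch runs without a real model.
    return "pass" if (len(prompt) + len(case)) % 3 else "fail"

def evaluate(prompt: str, test_cases: list[str], runs: int = 5) -> float:
    """Score a prompt across the whole suite, repeating each case
    several times to average out nondeterministic outputs."""
    passes = sum(
        call_llm(prompt, case) == "pass"
        for case in test_cases
        for _ in range(runs)
    )
    return passes / (len(test_cases) * runs)

baseline = evaluate("prompt v1", ["case-a", "case-b", "case-c"])
candidate = evaluate("prompt v2", ["case-a", "case-b", "case-c"])
# Ship the change only if the candidate improves the suite score,
# not just a handful of cherry-picked examples.
```

The point is the shape of the evaluation: a fixed test set, repeated runs, and a single comparable score per prompt version.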

Being able to run those types of experiments is bread and butter for a data scientist, but full stack development skills are also important, because we are using models as a service. We are integrating them into enterprise ecosystems and connecting them to other systems, and that's where we need the expertise of full stack developers who can build applications and integrate AI into those systems.

I've experimented with having both functions, a data scientist and full stack developers, working together, and found that it was not as effective as having someone who has the experience of both, because the work isn't evenly split. It's not as if half of your work is always data science and half is full stack development; it changes based on what phase you're in on the journey of building the application. So we tend to look for full stack developers who have been data scientists and have that blended skill, which is very rare and hard to find.

Another thing I think about when we're hiring into this field is that AI can now code better than a lot of us can. We are still needed to make sure it's coding the right things, but that speaks to the fact that a developer's role is starting to change. If AI can write hundreds of lines of code in a second, how do you know it's building the right thing? Only we can determine that, and with it come product management skills and stakeholder, customer and user empathy skills. Those become even more important for the development community as a whole.

How does someone develop those necessary skills?

My recommendation for folks coming out of school is to work on your product management skills. Work on understanding the user. There are a lot of people who can code, and AI can code very quickly, but knowing what the right thing to build is, that's a critical skill. Being able to work with users is a critical skill.

There are a lot of developers who are very good at their job but would not want to be the one talking to the user and figuring out what they need; they want to work on tickets. We stress to graduating interns, and others we speak to, to work on those skills and position yourself as someone who can translate the needs you're observing in the business into technology solutions. This is actually quite important, because you can't just rely on your business stakeholders, your users, to tell you what to build.

We don't rely on our users to tell us what to build, since most of them aren't familiar enough with what's possible to come up with that solution. So we observe what they do, put ourselves in their shoes, live alongside them and try to understand everything they're facing.

What strategic capabilities should insurers invest in when it comes to technology?

I think probably all insurers face the same challenge with legacy systems. And I say this to folks a lot: AI can't help you if it can't access the same data you can.

So, insurers with legacy systems that are designed for humans to use, not for AI, will have a lot of challenges. Information may be stored in PDFs for a human to read, but not in the XML or JSON that AI needs. Applications have user interfaces that people can use, but not a full set of APIs for AI to use. There are ways to overcome that: vision models can read PDFs, and there are LLMs that can look at the same screen you're looking at, but those are not optimal right now. They're not as fast, and maybe not as cost-effective. Foundationally, I would lean more towards building source systems and operational systems that enable AI access, instead of trying to work around it, because having systems that anticipate this kind of need is probably one of the most important ways to unlock the power of AI across an enterprise.
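The difference between a human-oriented and an AI-accessible system can be shown in a few lines: instead of rendering a record to a PDF for a person to read, serialize the same record as JSON that any client, including an AI agent, can consume directly. The `PolicyRecord` type and its fields are illustrative, not any insurer's actual schema.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class PolicyRecord:
    # Illustrative fields only, not a real carrier's data model.
    policy_id: str
    holder: str
    coverage_usd: int
    status: str

def to_api_payload(record: PolicyRecord) -> str:
    """Serialize a record as JSON so an AI agent (or any system)
    can consume it directly, instead of parsing a PDF rendering."""
    return json.dumps(asdict(record))

payload = to_api_payload(PolicyRecord("P-1001", "J. Doe", 250_000, "active"))
# `payload` is machine-readable; a PDF of the same record is not.
```

An endpoint returning payloads like this is the "full set of APIs for AI to use" that the answer describes, with no vision model or screen-reading workaround needed.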

How do you advocate for tech in a budget?

I think awareness is probably the first step, because many teams maintaining legacy systems today never had a need to keep anything other than the PDF. They never had a need to build an API, because everything is tightly coupled. So it's really about raising awareness that we now want to harness the power of an AI coworker, and just like any coworker, we need to make our systems accessible and usable, or else it can't help us. If that coworker can only see one-tenth of the data we can see, it can only help us on one-tenth of the work we do. And what good is that? We need to open people's eyes to the fact that this is the future, and here is what we should be doing today to prepare for that future: enabling our systems not just for humans to use, but also for AI to use.

The conversation shouldn't be about adoption anymore, it's more about acceleration.
