Track 1: Solving data accessibility to enable digital transformation

Digital transformation is successful only when data is available and accessible. Unfortunately, enterprise-level data availability and accessibility are obstacles that many insurance organizations have yet to overcome, and they must be conquered before the potential of advanced analytics, AI, and other digital technologies can be fully leveraged to improve risk management, product profitability, and customer experience. This session shares lessons learned on how to transform data from an underutilized possession into your greatest asset.

Key Takeaways:
  • Advantages of and insight into leveraging a factory approach to establish an enterprise data foundation.
  • Lessons learned on how to address the biggest obstacles to enterprise data availability and accessibility.
  • Insight into the realities and myths of end user data accessibility strategies.
Transcript:

Bryan Guilbeault (00:10):

We have more than 40 offices throughout the country. All those offices are on different systems, and even some that are on the same systems are capturing data differently. So one of the exercises was to get onto a common binding desktop platform for our delegated binding authority business. That meant migrating 40 of these delegated binding offices onto a single underwriting desktop on a single policy admin system. An enormous task, an enormous feat. And the real kicker: they wanted it done within 12 to 16 months.

To break the numbers down, as she said, it's 40 offices. We broke it into 11 groups, so that's 11 implementations over, let's just say, 18 months, and we just finished the last office a couple of months ago. To even meet that deadline, we needed a repeatable process, something where we didn't have to reinvent the wheel every time: walk into the data, understand the data, write code, lay down code. So we needed to create your typical ETL process, one we could reuse, and more importantly one that left enough time to build in a proper amount of testing, iteration testing of that data.

Our challenge was that the data was not consistent. One office had 400 lines of business. There are not 400 lines of business. So we had to translate those into our regular lines of business. The end result we were after was data we could actually use to report on, where we could provide accessible, real-time data across all our delegated binding authority business. And just to add more complexity to the project, they also said, well, we want you to consolidate the London practice as well. That involved all the risk-level data, because we handled that practice in-house and rated it, so we had to get into the risk-level and rating information. All of that had to be folded into the rollout.

And we had to put a team in place that would not get burnt out after the first, second, or third implementation, but could sustain that factory approach: build a machine, make a lot of upfront investment to be able to go into these offices and understand what the mapping is. What we did was dynamically create mapping tables that didn't require us to throw down any code. We were able to put the mappings in tables, and then the code would generate around them in our ETL process. Now, we had some other things working for us: we had a solid insurance-based data model to migrate to, and we had a good set of API tools to take that data and get it into the right transaction form. So we did have some favorable things going in, but we had to put it all together into a factory approach, and it was a lot of stress.

Probably after the third or fourth implementation, we started hitting our stride. What that meant is the team was clicking, the data was coming in, and we were able to get in more test cycles, which is the most important part of this: being able to get through more test cycles. In the first couple of offices, not lying, the first office we got through one test cycle; second office, maybe two. By the time we got to the fourth or fifth, the team was going through eight test cycles. So we were getting through not just that 80% of the data, but also dealing with the 20% of problem children that were taking up a lot of our time.
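
To make the table-driven mapping Bryan describes concrete, here is a minimal Python sketch; the table, field, and office names are all hypothetical, not from the actual project. The point is that the translation lives in data rather than code, so onboarding another office means adding rows, not rewriting the ETL:

```python
# Hypothetical sketch: table-driven value mapping in an ETL step.
# The mapping lives in data (here, a list of dicts standing in for a
# database table), so adding an office means adding rows, not code.

# One row per (office, source value) pair; all names are invented.
LOB_MAPPING = [
    {"office": "ATL", "source_lob": "Comm Pkg - Coastal", "target_lob": "Commercial Package"},
    {"office": "ATL", "source_lob": "Comm Pkg - Inland",  "target_lob": "Commercial Package"},
    {"office": "DAL", "source_lob": "CPP",                "target_lob": "Commercial Package"},
]

def build_lookup(rows):
    """Index mapping rows by (office, source value) for O(1) translation."""
    return {(r["office"], r["source_lob"]): r["target_lob"] for r in rows}

def translate(record, lookup):
    """Translate one extracted record; flag unmapped values as exceptions
    instead of failing the batch, so analysts can add the missing rows."""
    key = (record["office"], record["lob"])
    target = lookup.get(key)
    if target is None:
        return {**record, "lob": None, "exception": f"unmapped LOB {key}"}
    return {**record, "lob": target, "exception": None}

lookup = build_lookup(LOB_MAPPING)
extracted = [
    {"office": "ATL", "lob": "Comm Pkg - Coastal", "premium": 12500.0},
    {"office": "DAL", "lob": "Garage Liab",        "premium": 3200.0},  # no mapping yet
]
for rec in extracted:
    print(translate(rec, lookup))
```

Unmapped values surface as exceptions for analysts to resolve rather than failing the batch, which is what lets the 80% flow through while the 20% of problem children get the attention.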

Bruce Broussard (03:35):

We were building a lot of the tools we used to get through the test cycles in the first few offices. So the first few are really about creating that process. You're learning how to abstract the data, and it's really not until you get to the second and third that you start to see it. You have what you think is going to be different, and then when you start hitting the second, third, fourth offices, that's where we really saw what was different. It's never exactly what you think, but a lot of it is in how you frame the problem. So much of that is just looking at things in a way that lets you abstract, because virtually anything can be made repeatable if you drive it into small enough segments.

That allowed us to keep a very solid team that was consistent throughout the process. Everybody had very well-defined roles; you knew exactly what was expected on the other side. And managing the expectations of each office going in mattered: even though we're doing 40 offices, each office is only getting hit once. By the time you get to the fourth or fifth office, you can walk in and explain to them what is going to happen, manage their expectations so they understand what their participation will be, what is going to happen to them, the timeframe in which they will see things, what they will see, what kind of challenges we will have and how we will deal with them, so they can continue to operate their business while the process is going on. Those, I think, were the things that were extremely important.

Bryan Guilbeault (05:06):

And that was key, because we didn't bring down production. We didn't say production is down for two days, or a day; production stayed up. So part of that factory approach was to not impact production. We had to prep the data, extract it, translate it, load it, test it, go through iterations, and get it to the point where, when we were ready to insert into production, it would not bring down or impact production. That was part of the factory approach as well.
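
A minimal sketch of that staged flow, assuming invented record and field names: all the prep, translation, and validation happens against a staging copy, and production is only touched by the final insert after every gate passes.

```python
# Hypothetical sketch of the staged, non-disruptive load Bryan describes:
# extract, translate, and validate entirely in staging; production is only
# touched by the final insert, and only after every validation gate passes.

def validate(staged):
    """Validation gates run per test cycle; all must pass before insert."""
    errors = []
    if not staged:
        errors.append("empty batch")
    for row in staged:
        if row.get("lob") is None:
            errors.append(f"unmapped LOB on policy {row.get('policy_id')}")
        if row.get("premium", 0) < 0:
            errors.append(f"negative premium on policy {row.get('policy_id')}")
    return errors

def run_cycle(extracted, production):
    """One test iteration: translate into staging, validate, and insert
    into production only on a clean pass. Failures leave production as-is."""
    staged = [dict(r) for r in extracted]   # work on a copy, never the source
    errors = validate(staged)
    if errors:
        return False, errors                # iterate: fix mappings, reload, retest
    production.extend(staged)               # the only write to production
    return True, []

production = []
batch = [{"policy_id": "P-1001", "lob": "Commercial Package", "premium": 12500.0}]
ok, errs = run_cycle(batch, production)
print(ok, errs, len(production))
```

The design choice is that a failed cycle leaves production untouched; you fix the mappings, reload staging, and run another iteration.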

Karlyn Carnahan (05:34):

Yeah. So Bruce, you've been working on data conversion your entire career. How common is the factory approach? And if you're thinking about using a factory approach, what would be a consideration to say yes, you should, or no, you shouldn't? How do you think about whether or not to use a factory approach?

Bruce Broussard (05:56):

Yeah, a lot of times I think people get stuck in: I'm doing a transformation, maybe just from a legacy policy system to a new policy system. Well, I'm just going from one system to the other; it's a one-time thing; there's really not a lot of economy of scale there. But the reality is, most of the time you're dealing with multiple lines of business. And if you've got multiple lines of business, most of the time 60 to 70% of the data across the different lines of business is going to be common. There are different structures and processes for different tables through the different pieces, whether it's transactional history or things like that. So if you break the challenge up into small enough increments, you can leverage that factory approach.

It's really key to recognize what the work effort is and what the scale is, so that you can get specific folks focused on a certain set of things they can do repetitively, because there are going to be exceptions in everything you hit. The key to successfully executing a factory approach is to get people to really understand the basics and routinely execute the normal process, so that all the brainpower and experience they have can focus on the 5, 10, 15, 20% of exceptions that occur in each iteration. Then you can clobber those very quickly, and you can get to a very predictable schedule. Even though you don't know what the process is going to throw at you the next time through, you can get very predictable, because you've narrowed it down to a fairly small segment. That's really key to any factory approach. And I rarely see projects where that isn't an opportunity, right? Anything that isn't a very small, 10-to-12-week, one-time-through effort, almost all of those can be.

Bryan Guilbeault (08:04):

We had issues where the team started focusing on the 20% of exceptions before we had the tools built, and that was starting to eat into the deadlines and frustrate the team. So part of it is making sure you create the right tools for the bigger picture, so you can have people focus on that 20%, which is the most problematic data you're going to deal with. There's no way to do a project like this if, every time, it takes the team three months to figure out the data before you can actually start getting the data moving.

Karlyn Carnahan (08:36):

So let's dig into that a little bit more, because you've talked about 20% being variable; you said 10 to 15; you said 40 to 60% is common. So somewhere around 40% or more is not common data. And you mentioned 40 offices; you mentioned one that had defined 400 different versions of a line of business, right? So given these differences among the data sources, given the wide variety, especially in your project, what does it mean to apply a factory approach? How do you actually deal with that? How do you think about addressing and analyzing that data to even figure out where those differences are, and how did that impact the resourcing of the project?

Bryan Guilbeault (09:17):

Well, you've got to know your end product. You have to really understand the end product, and in our case, the end product was getting everyone, all our binding offices, onto the same level playing field. In other words, these are the lines: a commercial package is a commercial package. It shouldn't have its premium broken out into separate lines of business. Getting everyone onto that same level field, with the same understanding of what the end product should be, then you can work towards building it. And that was tough, because there are a lot of old-school underwriters out there saying, this is how I define my product. We had to work with them and say, well, that's fine, but then you cannot share your data across other binding offices. We cannot leverage this in reports. We cannot do analysis. We cannot do all the stuff that we want to get out of this when we move it to the data warehouse. So what do we need to do? That's step one.

Step two, once you're in that approach, is, like we said, take the analysis out of the hands of the programmers and developers and put it back in the hands of the business analysts: go out there and ask, what's different about this office versus the product we're trying to build? Get us, from a business perspective, the translation that needs to happen, and put it in English, in a table. Table-ize it, and the code forms around it. So we didn't have to write code every time; we knew this line of business mapped to that line of business. It's in the mapping. Same with entities, same with carriers and markets; the list goes on. Even our retail agents weren't defined consistently across all our offices. It just went on and on like that.
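
One way to read the "put it in English in a table" idea is a single mapping structure shared across domains. A hypothetical sketch, with all domain, office, and value names invented: the business analysts maintain rows for lines of business, carriers and markets, and retail agents alike, and one translation routine serves them all.

```python
# Hypothetical sketch of one generic mapping table spanning domains:
# analysts maintain rows per domain (line of business, carrier/market,
# retail agent), and a single translation routine serves them all, so
# there is no new code per domain or per office.

MAPPINGS = [
    # (domain, office, source value, enterprise value); all names invented
    ("lob",     "HOU", "Comm Package w/ Prem Breakout", "Commercial Package"),
    ("carrier", "HOU", "Lloyd's Syn 1234",              "Lloyds Syndicate 1234"),
    ("agent",   "HOU", "Smith Ins Agcy",                "Smith Insurance Agency"),
]

LOOKUP = {(d, o, s): t for d, o, s, t in MAPPINGS}

def translate(domain, office, value):
    """Return the enterprise value, or None so the gap surfaces as an
    analyst task (a new table row) rather than a code change."""
    return LOOKUP.get((domain, office, value))

print(translate("lob", "HOU", "Comm Package w/ Prem Breakout"))
print(translate("carrier", "HOU", "Unknown Market"))  # -> None: add a row
```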

Karlyn Carnahan (10:59):

So there are a lot of challenges with that. I'm going to ask you, Bruce, to comment on this, because the minute you get rid of all of that historical data, all of the trends, all of the stuff that I've relied on for years and years and years, you have this massive change management issue. And so, Bruce, my question for you: just making this new data available to the business in a consistent manner, so that going forward everything will be consistent, doesn't solve the overall problem. Understanding the data context and understanding those relationships then becomes key to getting the value from the data, and that's not always obvious, especially when you're doing a massive conversion like this. So what kinds of things can you do? What kinds of tools or practices or advice? How do you make the data available to the business with all of the information they need to leverage it effectively?

Bruce Broussard (11:51):

Yeah, I think, and I cringe every time I sit in a meeting and have business users or technical folks tell me, just make the data available, I'll be able to do it. No, you're not. You're going to fail. I'll see you in two years after you get fired.

Karlyn Carnahan (12:07):

That's encouraging.

Bruce Broussard (12:09):

So some of you in the back know I've actually said that to customers on occasion. But the business context and the use of the data are so important. We've all sat in meetings where you have a CFO at the table and an actuary at the table, you ask them what the net premium was last month for workers' comp, and you get two different numbers, because they're looking at it two different ways: from an effective date standpoint or an accounting posting date standpoint. It's not that the data is wrong; it's that there are different contexts through which the data needs to be used.

So we look at it this way: there has to be a static, universally accepted data dictionary and data lexicon through which data is viewed throughout the enterprise, but then you need to make the data available to people in the context in which they have to use it. What we typically do in an ODS or data warehouse layer is have a conformed layer at that enterprise definition level. We then create marts, because frankly, marts are cheap, especially in the cloud world now, right? They're easy to spin up. With lineage and built-in reconciliation, you have confidence in the quality of the data, and each mart provides the data in the context in which it will be used. We've got one for underwriting, one for the actuaries, one for the accounting and finance folks, one for the people looking at it from a customer service standpoint. So you provide the data in the context in which they'll use it, while making sure they have insight into where the data was sourced and how it reconciles back, so they have confidence. The minute users lose confidence in the data, that solution is dead; that's an unrecoverable situation. I've made a very nice career, frankly, out of walking into places where that's happened. So we start our solution at the reconciliation and lineage level, because if you can't prove that to people, they're not going to have confidence in it.
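
A minimal sketch of the conformed-layer-plus-marts pattern Bruce describes, with invented table and field names: every mart row carries lineage back to its source row, and a reconciliation check proves the mart ties back to the conformed layer.

```python
# Hypothetical sketch: a conformed layer projected into a context-specific
# mart, with lineage on every row and a reconciliation check back to source.
# All table and field names are invented.

CONFORMED = [
    {"policy_id": "P-1", "lob": "Workers Comp", "net_premium": 1000.0,
     "source_system": "legacy_a", "source_key": "A-77"},
    {"policy_id": "P-2", "lob": "Workers Comp", "net_premium": 500.0,
     "source_system": "legacy_b", "source_key": "B-12"},
]

def build_underwriting_mart(conformed):
    """Project the conformed layer into an underwriting-context mart,
    preserving lineage columns so users can trace every row to source."""
    return [
        {"policy_id": r["policy_id"], "lob": r["lob"],
         "net_premium": r["net_premium"],
         "lineage": f'{r["source_system"]}:{r["source_key"]}'}
        for r in conformed
    ]

def reconcile(mart, conformed):
    """Confidence check: mart premium must tie back to the conformed layer."""
    mart_total = sum(r["net_premium"] for r in mart)
    src_total = sum(r["net_premium"] for r in conformed)
    return mart_total == src_total, mart_total, src_total

mart = build_underwriting_mart(CONFORMED)
print(reconcile(mart, CONFORMED))  # (True, 1500.0, 1500.0)
```

Marts for the actuaries, accounting, or customer service would be further projections of the same conformed layer, which is what keeps everyone arguing from the same source.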

Bryan Guilbeault (14:09):

And that's exactly what we had. That was our end product. We had those fights internally for years, and even during this project they would come back and question our numbers. And the marts are a great way to explain it: we want our submission-quote-bind data in this mart, we want our policy-level data in this one, we want our revenue data in this one, because the business is looking at the data in different ways. But at the end of the day, we have to prove that the source was right. When you have so many different sources, and so many data dictionaries that don't match, terminology that doesn't match, understandings of the data from different user perspectives that don't match, no matter what you do, you're going to end up with the wrong product. That was the challenge: having to build this in and say, guys, this is our end game here. This is how you're going to pull an in-force report. These are your submission-quote-bind ratio reports. This is your book of business report. This is how we're going to look at it: if you're going after premium, it's going to be by effective date; we can't turn it upside down and do it by accounting date. We had to figure all that out, build it into the model, and make sure that as we moved that data from all these different legacy systems, it got translated appropriately, because we couldn't lose the confidence of the business.
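
The effective-date-versus-accounting-date split lends itself to a small worked example. This hypothetical Python snippet, with invented dates and amounts, shows the same two transactions producing different monthly totals depending on which date basis you aggregate by; neither number is wrong, they are different contexts.

```python
# Hypothetical worked example of the "two numbers" problem: the same
# transactions summed by effective date versus accounting posting date
# give different monthly totals. Dates and amounts are invented.

from collections import defaultdict
from datetime import date

TRANSACTIONS = [
    # policy effective in June, but posted to the ledger in July
    {"policy_id": "P-1", "effective": date(2023, 6, 28),
     "posted": date(2023, 7, 2), "net_premium": 10000.0},
    {"policy_id": "P-2", "effective": date(2023, 7, 5),
     "posted": date(2023, 7, 5), "net_premium": 4000.0},
]

def monthly_total(rows, date_field):
    """Sum premium by (year, month) of the chosen date basis."""
    totals = defaultdict(float)
    for r in rows:
        totals[(r[date_field].year, r[date_field].month)] += r["net_premium"]
    return dict(totals)

print("by effective date:", monthly_total(TRANSACTIONS, "effective"))
# {(2023, 6): 10000.0, (2023, 7): 4000.0}
print("by posting date:  ", monthly_total(TRANSACTIONS, "posted"))
# {(2023, 7): 14000.0}
```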

Karlyn Carnahan (15:22):

And so let me dig into that one a little bit more, because part of what the business wants is all of the data. I had an actuary who worked for me once, and he told me what he wanted was all of the existing data that was in the system, plus all of the metadata around that data, plus all possible third-party data, for all time. And that was his starting point. Okay.

Bryan Guilbeault (15:42):

It must be the same guy. That guy had just emailed me two days ago and said, can you dump every field in a database? I want to see what's out there.

Karlyn Carnahan (15:49):

And so this is not realistic. When you start off having these multiple systems and this data that doesn't match, how did your firm go about agreeing on what level of data to convert, and how much data is enough? And what happened to the data that you didn't convert? How did you go about that decision-making process?

Bryan Guilbeault (16:08):

Yeah, let's just say we agreed to disagree when we had those conversations. One, we needed enough data to come over to run the business. We're bringing new business and we're bringing renewals, and if we're bringing renewals, we've got to bring at least one year over. Then the discussion got a little more intense: well, we want ten years, we want five years, we want three years. It was always more and more. But there was a time factor on what we could actually move, what we could translate, what we could cleanse, and every time you go back one more year on these legacy systems, the data gets progressively worse. So we had to explain to the business and the executives: the further we go back, the less value the data is going to have, and why. We agreed on, not a random, but a logical period to convert and migrate, again, enough to run the business and to produce current and prior-year reports.

What happened to the data we didn't bring over? It's still there, in our legacy systems. We're working on a data warehouse project that will take that data, unstructured, however bad it is, and put it out there. We're not going to lose it. But at the end of the day, the data we brought over is sufficient for the actuarial team to do business. And really, from an actuarial standpoint, the majority of the data they want is current data. They want to know what the property characteristics are today on in-force policies, and going back in history, the data is not always as good as it should be.
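
Bryan's point that the data gets progressively worse each year back suggests one way the cutoff discussion can be grounded in numbers. A hypothetical sketch, with invented field names, profiling the completeness of required fields by policy year; nothing here is from the actual project, it just illustrates the kind of evidence that supports a "logical period" decision.

```python
# Hypothetical sketch: quantify "the further back you go, the worse the
# data" by profiling completeness of required fields per policy year.
# Field names and sample rows are invented.

REQUIRED = ["lob", "carrier", "net_premium"]

def completeness_by_year(rows):
    """Percent of required fields populated, grouped by policy year."""
    stats = {}
    for r in rows:
        year = r["effective_year"]
        filled = sum(1 for f in REQUIRED if r.get(f) not in (None, ""))
        got, total = stats.get(year, (0, 0))
        stats[year] = (got + filled, total + len(REQUIRED))
    return {y: round(100 * g / t, 1) for y, (g, t) in stats.items()}

legacy = [
    {"effective_year": 2022, "lob": "CP", "carrier": "X", "net_premium": 1.0},
    {"effective_year": 2015, "lob": None, "carrier": "",  "net_premium": 1.0},
]
print(completeness_by_year(legacy))  # e.g. {2022: 100.0, 2015: 33.3}
```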

Karlyn Carnahan (17:37):

So Bruce, I want to dig into this a little bit with you, because you see a lot of the challenges that companies have as they start working to make this data available. One of the challenges is that you've got different kinds of users: you have power users and casual users, you have people who interpret the data through standard reports or targeted data marts. So how do you think about balancing the benefits and the difficulties to prioritize these challenges? Are there advantages to addressing them in any particular order, or what are some of the approaches that you've found helpful?

Bruce Broussard (18:12):

So we always segment our user communities. There are going to be power users who can go after the data themselves if it's properly defined and pointed out to them; with fairly minimal education, they can be fairly self-sufficient. But we really focus on making sure the documentation, the data dictionary, the lineage visibility, and the reconciliations are all available online in the same way they pull up a dashboard or report: they can look at the lineage, or they can look at the dictionary. If they have to go looking for it, they won't, and then they will misuse the data, and then there will be an argument, and then things get ugly. So the more we put online, the better. But we always try to provide similar constructs regardless of the level of user, whether it's a power user or someone who really just needs to click on a report they've subscribed to. It needs to come from the same source, so you can remain confident in the quality of the data and in the context in which the data is being presented.

And I think one of the things that was really important in what Bryan was talking about, and it also ties back to that factory approach, is that as we got through the different offices, every time you hit an exception, it's only an exception once, because, as Bryan said, the first time you hit the exception, it then becomes part of the rules. Something you hit in the eighth office is covered by the time you get to the twelfth and thirteenth offices; the number of exceptions becomes less and less, and your process gets more predictable. Again, it comes down to how you define what the factory process is working against, how many iterations you get, and how quickly you can go through the process. Experience really helps in defining those things. And how many things are more important in our IT projects than being predictable in cost and time? How many of us have seen these projects, especially data projects, have a much higher failure rate than other transformation projects?
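
The "an exception is only an exception once" idea can be sketched as an exception registry that feeds the rule set. A hypothetical Python illustration, with invented office and value names: anything unhandled in one office becomes a rule before the next office in the rotation, so the exception count falls with each iteration.

```python
# Hypothetical sketch of "an exception is only an exception once": each
# unhandled case raised in one office is promoted into the shared rule
# set, so later offices are covered automatically. Names are invented.

RULES = {"Comm Pkg": "Commercial Package"}   # the growing rule set

def process_office(records, rules):
    """Apply known rules; collect anything unhandled as new exceptions."""
    exceptions = []
    for rec in records:
        if rec["lob"] not in rules:
            exceptions.append(rec["lob"])
    return exceptions

def promote(exceptions, rules, resolutions):
    """After analysts resolve an exception, it becomes a rule for every
    subsequent office in the rotation."""
    for src in exceptions:
        if src in resolutions:
            rules[src] = resolutions[src]

office_8 = [{"lob": "Comm Pkg"}, {"lob": "Garage Liab"}]
exc = process_office(office_8, RULES)        # ['Garage Liab']
promote(exc, RULES, {"Garage Liab": "Garage Liability"})
office_12 = [{"lob": "Garage Liab"}]
print(process_office(office_12, RULES))      # [] -- covered now
```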

Karlyn Carnahan (20:20):

And so speaking of failures, Bryan, your project was huge and you did it in a ridiculously fast amount of time and it was successful. But if you were starting all over today, is there anything that you would've done differently?

Bryan Guilbeault (20:34):

That's a tough one. There are probably quite a few. I mentioned it earlier: I think building in a more structured test iteration process. We spent a lot of time upfront building the tools, and part of what we should have spent more time on is how we were going to test. As we got through the offices, we got better at running test iterations: how to load the data, how to clean it out, how to reload it. That would probably be the biggest one.

The second one, and this is a tough one, is that we should have pushed back on the business a little more. We got a lot of, that's okay, we'll deal with it. And that has come back to haunt us. We pushed stuff out saying, we just need to get this project in, we just need to get this office live, and we'll go back and clean it up as we go. Well, that put an extra load on the stabilization period, because we should have just dealt with it during the project instead of saying, move on, we'll deal with it later. Some of that came back to hurt us. One example: because we couldn't translate a market correctly, we decided to load a dummy market. It was on submissions, so no big deal, but that came back to haunt us a bit later. We had certain things like that where we probably should have said, time out, let's slow down and figure this out. But we were on that fast track to get things done, and at that point...

Bruce Broussard (22:08):

At that point, you didn't have the track record of predictability yet, right? You're still fighting for credibility at that point.

Bryan Guilbeault (22:13):

Yes.

Bruce Broussard (22:14):

It's easy to say in hindsight.

Bryan Guilbeault (22:16):

Yeah, it's easy to say.

Bruce Broussard (22:16):

But you might not have survived. In the first couple of iterations, that did not make sense.

Bryan Guilbeault (22:20):

And we had offices out there that would kick back and say, we're not ready to go live, we don't like what we're seeing, we're not seeing it the way we used to see it. So we had offices that said, pull us out of the rotation. We actually did pull an office out of the rotation and put them at the end of it. They still weren't any happier once we got to them, but we pushed them over the edge.

Karlyn Carnahan (22:41):

Well, we're just about out of time. And Bruce, I want to ask you, having done a lot of these projects and seen a lot of these projects, if you would give advice to other folks who are thinking about getting started on something like this, what are the top one or two things that you would advise folks on?

Bruce Broussard (22:55):

Well, I think, and Bryan said it earlier, having a clear vision for where you want to end up, and having the political capital to get away with it, is huge. If you're having to do the selling on what the end result is while you're trying to execute, it's a very difficult place to be. It's survivable, potentially, but it's a much more difficult place to be. It's a lot easier to rally people around dealing with problems when they come up. When you're dealing with data, it's kind of like an archaeological dig, right? You never know exactly what you're going to run into until the shovel hits the ground. A lot of times you know you're going to hit some exceptions, and you can plan for a certain volume of exceptions and routines, but you don't know exactly what they're going to be. And if you don't have the buy-in, and a lot of times people have been burned by failed projects, then the minute they hit the first bump in the road...

Bryan Guilbeault (24:01):

They panic.

Bruce Broussard (24:02):

They start in, and they can bail. So really, having the political capital to survive the early stages matters. The other thing is to engineer early success into the project. That is so important. A lot of people talk about it, but if you deal with things in small increments, get some success, and build some credibility, that can give you the ability to execute down the line. Yeah, because these things are never easy.

Bryan Guilbeault (24:30):

And at the end of the day, for all the kicking and fighting, the company is in a much, much better position from a data perspective, heading into our next project of getting all of that into a proper data warehouse and data marts. For our delegated binding authority business, we're in a much, much better position from a data perspective: the data dictionary, the definitions, how things work.

Karlyn Carnahan (24:50):

So 30 minutes isn't nearly enough time to really get into all of this; the time went by in a nanosecond. These guys are both going to be around, and I know they would both be delighted to talk with you in more detail about exactly what they've been doing and how this project was successful. In the meantime, I hope you'll all join me in thanking Bryan and Bruce for participating and sharing their story. Thanks so much.