Insurers Tackle Data Quality With Robust Management

On its surface, the data-rich insurance industry seems like a perfect match for today’s sophisticated business intelligence and analytics technologies. But many carriers are learning a harsh rule of data: Garbage in, garbage out.

For insurers, data quality can be a tough nut to crack, and the value of the output from any analytics technology is directly related to the quality of its inputs. Data constantly flows into carriers from different sources and is processed through different systems. In such a dynamic and fragmented computing environment, it’s not surprising that insurers have trouble recognizing that a customer who just purchased a new homeowner’s policy is the same customer whose 20-year-old term life policy is about to expire.

Those kinds of data connections, however, are essential for cross-selling. They’re also essential for accurately pricing policies, uncovering new market opportunities and ensuring compliance with regulatory mandates. Therefore, data management is no longer — as it may once have been — merely a bit of housekeeping that IT does to keep its own house in order. It has now become an existential imperative for any insurer hoping to successfully compete in an increasingly digital world.

“Data is a strategic asset,” declares Michael Stege, data governance director at Allstate. “It is therefore essential to treat it as such — and to put processes around it that enable customers, agents and employees to have the highest confidence in its quality.”

Each insurer, no matter what its size or line of business, sees data the same way. Paul Ayoub, senior vice president and CIO at commercial lines insurer FCCI Insurance Group, calls data a “critical differentiator.”
“Every insurance company writes policies and pays claims, so it’s really the right data used the right way that can give you an advantage in the market,” Ayoub says.

Dirty Data and Its Downsides

In a perfect world, every piece of data, in every field, in every application would be accurate, complete and up-to-date. Every insured, every type of coverage and every cause of loss would also be identified identically across all those applications. This is, of course, not the case. Issues of data quality and consistency crop up in insurance companies’ systems for all kinds of reasons, including:

  • Inadequate source system controls that allow customer service reps, brokers and others to make data entry mistakes and omissions
  • Legacy systems that utilize out-of-date formats and/or lack fields to capture data that has since become relevant to the business
  • M&A activity that brings systems using different data formats and data definitions into the enterprise
  • External sources such as credit and motor vehicle agencies that may deliver imperfect data
  • Flawed data flows that distort data or propagate errors as data is automatically moved, aggregated or distributed across different systems

The consequences of resulting shortfalls in data quality are diverse. One is, obviously, inaccurate analytic outcomes. If a given business entity isn’t consistently identified in the same way across all the systems from which data is drawn for analysis, the results will be skewed. “The term ‘policyholder’ may refer only to active policyholders in one system — but to both active and inactive policyholders in another,” explains Allstate’s Stege. “Those kinds of discrepancies have significant impact on analytics done across multiple systems.”
Bad analytics can lead to all kinds of bad business outcomes, including inaccurate pricing, unrealistic sales projections and poor targeting of costly marketing efforts. Those bad outcomes can, in turn, dampen the overall enthusiasm of stakeholders across the business for analytics, resulting in lower adoption and less investment in enabling technologies. So what starts off as a data management problem can easily wind up as a surrender of analytic advantage to the competition.

Poor correlation of data between systems can also cause insurers to miss cross-sell and upsell opportunities. Donald Light, director of the Americas Property and Casualty Practice at Celent, suggests that bad data can undermine an insurer’s relationship with its customers in other ways. “If you share inaccurate information with a customer, that customer doesn’t just think you made a single mistake,” he says. “They may also start to wonder what else is going on ‘behind the curtain’ and become more skeptical about the company’s credibility more broadly.”

Bad data can also cost IT, business analysts and finance teams a lot of productivity. “If premiums don’t match when they should, for example, a lot of time and productivity go down the drain trying to get to the root of the problem,” notes FCCI’s Ayoub. “If you can eliminate those hunting expeditions, people can devote more of their energies to the innovation that will grow your business.”

Compliance problems are yet another possible consequence for insurers. After all, if an insurer doesn’t know exactly where all of its data goes and all the ways that data is used, it can’t readily prevent violations of privacy and confidentiality mandates. Inadequate data management may similarly lead to the security exposures that result when malicious internal users have access to much more data than they should.

Getting the Business on Board

Data management may appear at first to be primarily a technical challenge, but insurance companies that have succeeded in meeting that challenge are typically those that approach it first and foremost as an issue for the entire enterprise.

“Excellence in data governance requires a significant cultural change in the business,” Stege says. “If that change doesn’t take place, data quality issues will persist regardless of what you attempt to do from a purely technical perspective.”

Allstate is bringing about that cultural change through an enterprise-wide data stewardship program that embeds data specialists in the company’s business units to help end users understand how they use data today, and what kind of improvements might benefit them in the future.

“By following the flow of data from the moment it’s captured in a call center until it arrives in a system of record, you learn exactly who has access to it, who is authorized to update it, and how those events are captured in your metadata and data lineage,” Stege explains. “That way, you can make fact-based decisions about how to improve quality and control, while also ensuring that people have access to the right data at the right time in the right form.”

Another key facet of Allstate’s data stewardship program is assigning “business stewards” in the company’s business units who take responsibility for data flows within their defined area of accountability. “Ownership matters,” Stege says. “So when we need to know how users in some part of the business understand and define a concept like ‘household,’ we get a clear and authoritative answer.”

A third component of the Allstate program is proactive engagement with IT. Rather than wait until development projects are already in the works before evaluating how they will consume and produce data, Stege’s team tries to get in on the planning as early as possible. “Good data management discipline slows you down a little bit at the beginning of a project,” Stege admits. “But the benefits in terms of both application functionality and enterprise data governance at the back end of the project far outweigh whatever slight delay there might be on the front end.”

XL Group decided to tackle data management by creating an entirely new executive position, Head of Data, Analytics and Pricing, filled by Tim Pitt, who began his career in the business as an actuary. The company recognized that the effectiveness of analytics, and its positive impact on the business in the form of pricing, is linked to how well the company manages data at a strategic, enterprise level, Pitt says.

“Uniting all three disciplines organizationally under a single person helps ensure that we are thinking about and working holistically on these issues,” he explains. “The simple truth is that we can be more successful if we work together on these challenges, rather than each trying to solve our own little piece of the larger puzzle.”

That emphasis on a holistic approach to data management extends beyond Pitt’s team to line-of-business and IT leaders, who form a high-level steering group focused on optimizing data as a corporate asset. Similar to Allstate, “The idea is to get those who consume data and those who have to execute on their requirements sitting at the same table,” Pitt explains. “That way, we can focus our efforts on those measures that will most directly move the needle on business performance.”

The Mechanics of Data Quality

From a technical perspective, the first step in any insurance company’s journey to better data management is to benchmark the current state of data quality. This assessment is essential, because insurers typically have a large number of small issues that, left unaddressed, may not cause tangible pain until they reach critical mass years later — at which point they can be costly to fix.

Some of those issues can be fairly subtle. To a layperson, for example, there might not seem to be much of a difference between a zero, a null value and a blank data field. But under the hood of reporting and analytic engines, those inputs can have very different implications.
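
To make the distinction concrete, here is a minimal sketch in Python (using the pandas library) with a hypothetical annual_premium column; the column name and values are illustrative only:

```python
import numpy as np
import pandas as pd

# Hypothetical annual_premium values as they might arrive from different systems:
# a real amount, a true zero, a null (missing) value and a blank string.
raw = pd.DataFrame({"annual_premium": [1200.0, 0.0, np.nan, ""]})

# The blank string makes the column non-numeric, so coerce it first;
# after coercion the blank becomes another null.
premiums = pd.to_numeric(raw["annual_premium"], errors="coerce")

print(premiums.mean())            # 600.0 -- nulls/blanks excluded from the average
print(premiums.fillna(0).mean())  # 300.0 -- nulls/blanks treated as zero
print(premiums.count(), "of", len(premiums), "rows usable")  # 2 of 4
```

The same four records yield an average of 600 or 300 depending on how the missing values are interpreted, which is exactly the kind of silent divergence that undermines confidence in reports.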

Insurers can scan their enterprise environments for such data issues in a variety of ways. One method is to take a sample set from targeted source systems and test for common anomalies such as four-digit ZIP codes or people named “Smiht.”

Another popular technique involves a review of exception reports from extract, transform, load (ETL) and other data transformation tools. Still another is to log the discrepancies that pop up between repositories that should theoretically both contain only “gold standard” data, such as a general ledger system and a primary data warehouse.
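
As a rough illustration of the first and third of these techniques, the sketch below scans a sample extract for malformed ZIP codes and reconciles premium totals between two repositories; the file and column names are hypothetical:

```python
import pandas as pd

# Hypothetical sample extract from a policy admin system.
policies = pd.read_csv("policy_sample.csv", dtype={"zip_code": str})

# Anomaly scan: flag ZIP codes that are not exactly five digits.
bad_zips = policies[~policies["zip_code"].str.fullmatch(r"\d{5}", na=False)]
print(len(bad_zips), "rows with malformed ZIP codes")

# Cross-repository check: premium by month in the data warehouse should
# reconcile to the general ledger within a small tolerance.
ledger = pd.read_csv("gl_premium_by_month.csv")     # hypothetical GL extract
warehouse = pd.read_csv("dw_premium_by_month.csv")  # hypothetical DW extract
merged = ledger.merge(warehouse, on="month", suffixes=("_gl", "_dw"))
merged["diff"] = (merged["premium_gl"] - merged["premium_dw"]).abs()
print(merged[merged["diff"] > 0.01])  # discrepancies worth tracing to a root cause
```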

Once this initial assessment has been made, the next step is to trace these issues to their root cause. In most environments, this can be a difficult and time-consuming task. “The process of following the history of a data discrepancy backward through multiple systems and steps is usually a manual one that requires considerable forensic expertise on the part of both IT and line-of-business stakeholders,” observes FCCI’s Ayoub.

With this insight into the nature of data quality issues, insurers can then set about the task of making necessary improvements. One common approach is to tighten up data entry controls at source systems themselves. This is usually not necessary with newer software, which tends to use functions such as drop-down menus and auto-complete prompts to ensure accurate inputs. But with older systems, it may be worthwhile to enhance existing user interfaces with such features.

In some cases, it may also be advisable to more substantially modify interfaces so that information currently entered into free-form text comment fields can instead be entered as a standardized code.
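
A minimal sketch of what such point-of-entry validation might look like, assuming a hypothetical set of standardized cause-of-loss codes and field names:

```python
# Hypothetical controlled vocabulary replacing a free-form "cause of loss" comment field.
CAUSE_OF_LOSS_CODES = {"FIRE", "WATER", "WIND", "THEFT", "LIABILITY", "OTHER"}

def validate_claim_entry(record: dict) -> list[str]:
    """Return a list of validation errors; an empty list means the record is accepted."""
    errors = []
    if record.get("cause_of_loss") not in CAUSE_OF_LOSS_CODES:
        errors.append("cause_of_loss must be one of the standard codes")
    zip_code = record.get("zip_code", "")
    if not (zip_code.isdigit() and len(zip_code) == 5):
        errors.append("zip_code must be exactly five digits")
    if not record.get("policy_number"):
        errors.append("policy_number is required")
    return errors

# This entry would be rejected at the screen, before it reaches any downstream system.
print(validate_claim_entry({"cause_of_loss": "water damage", "zip_code": "3420"}))
```

Catching a misspelled code or truncated ZIP at the entry screen is far cheaper than correcting it after it has propagated downstream.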

That said, the enforcement of data quality controls at the point of entry is a classic example of how IT and the business have to work together to make sure they are on the same page. “Everyone would like to make sure that agents and staff enter data at the front end with maximum accuracy, but you also have to be realistic about the fact that their priority is to get a policy issued quickly for the customer who, in the case of personal lines, could be right there at the time,” Ayoub says. “A key requirement of effective leadership in enterprise data management is an ability to understand and resolve these tensions.”

Another useful technique is to implement an operational data store (ODS) that can serve as a transitional staging area for data as it moves from source systems to systems of record and/or analytic environments. In an ODS, data anomalies can be discovered and fixed using various types of matching processes and transformation rule sets. Then, once the data is cleaned and normalized, it can be passed on to the systems where it will actually be used by the business.
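
As an illustration of the kind of rule-driven cleansing that might run in an ODS, the sketch below normalizes a few fields and collapses obvious duplicates before the data moves on; the rules, file names and columns are hypothetical:

```python
import pandas as pd

# Hypothetical staging table consolidated from several source systems.
staged = pd.read_csv("ods_staging_customers.csv", dtype=str)

# Transformation rules applied before the data moves on to systems of record.
staged["last_name"] = staged["last_name"].str.strip().str.upper()
staged["state"] = staged["state"].str.strip().str.upper().replace({"FLA": "FL", "FLORIDA": "FL"})
staged["phone"] = staged["phone"].str.replace(r"\D", "", regex=True)  # keep digits only

# Simple matching rule: rows that agree on name, date of birth and phone
# are treated as the same customer and collapsed into a single record.
cleansed = staged.drop_duplicates(subset=["first_name", "last_name", "date_of_birth", "phone"])

cleansed.to_csv("ods_cleansed_customers.csv", index=False)
```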

Ideally, from this point on, any given piece of data should be passed in a controlled manner from one system to another. This controlled movement accomplishes two objectives. First, it mitigates the risk that errors or inconsistencies will slip into the data by having all connected systems rely on a “single version of the truth,” rather than having data re-entered manually on different systems. Second, it operationalizes the logical relationship between the various instances of the same piece of information.

“The ability to gain actionable, high-value insights from data is largely contingent on the ability to logically link data in different systems to each other,” says XL Group’s Pitt. “By making sure a shared field is passed from system A to system B, you protect that logical link from potential data disparities.”
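
A trivial sketch of the point, with hypothetical tables and identifiers: when the shared key travels with the record from one system to the next, downstream joins are unambiguous; if the second system re-keyed customers from manually entered names, the link would depend on error-prone string matching instead.

```python
import pandas as pd

# System A (policy admin) passes its customer_id along with each record...
system_a = pd.DataFrame({"customer_id": [101, 102],
                         "policy": ["HO-2024-001", "TERM-1995-042"]})

# ...and system B (claims) stores the same shared key rather than re-keying by name.
system_b = pd.DataFrame({"customer_id": [101, 102],
                         "claims_paid": [0.0, 5400.0]})

# The logical link is preserved, so a combined view is a simple, exact join.
print(system_a.merge(system_b, on="customer_id"))
```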

More Than Remediation

While data management initiatives often begin as efforts to remediate problems, the discipline should not be viewed solely as a way to eliminate and avoid them. Data management initiatives also contribute positively to business performance. “When you understand how users are actually consuming data and what they are trying to accomplish at the point of consumption, you can take inefficiencies out of their workflows and make sure they get data that is as accurate, complete and up-to-date as they need it to be,” says Allstate’s Stege.

Changes in the way data is handled at the user level can help the business in other ways, too. XL Group’s Pitt cites loss codes as an example. A company the size of XL Group has many claims systems that capture cause of loss in many different ways. Standardizing loss codes across systems enables actuaries and underwriters to gain a common view of claim activity that can potentially be correlated with other claimant attributes, such as the age of a claimant’s home or the geographic area where that claimant operates an automobile.

However, according to Pitt, loss codes can also be further redesigned to be more granularly descriptive of cause. This tuning can help the business in all kinds of ways, from improving personalization of the customer experience to improving underwriting rules.
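
As a hypothetical illustration, the crosswalk that maps each claims system’s local codes to a shared, more granular standard might look like the following; the systems and codes are invented:

```python
# Hypothetical crosswalk from each claims system's local loss codes
# to a shared, more granular standard.
LOSS_CODE_MAP = {
    ("claims_sys_a", "W"):   "WATER_PIPE_BURST",
    ("claims_sys_a", "W2"):  "WATER_WEATHER",
    ("claims_sys_b", "104"): "WATER_PIPE_BURST",
    ("claims_sys_b", "105"): "WATER_APPLIANCE_LEAK",
}

def standardize_loss_code(source_system: str, local_code: str) -> str:
    # Unmapped codes are flagged rather than silently dropped,
    # so the crosswalk itself can be improved over time.
    return LOSS_CODE_MAP.get((source_system, local_code), "UNMAPPED")

print(standardize_loss_code("claims_sys_b", "104"))  # WATER_PIPE_BURST
print(standardize_loss_code("claims_sys_a", "Z9"))   # UNMAPPED
```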

“Optimization of data is an evolving process,” he says. “The idea is to continually work on making your data as relevant to the business as possible as you keep learning more about what business users need, and as those needs change in response to the changing realities of the market.”

Garrett Flynn, information management principal in KPMG’s Financial Services practice, believes that disciplined data management can also contribute to business performance not only by improving analytic results, but also by facilitating executive-level acceptance of those results.

“When you present analytics findings to executives, you have to be prepared to answer the hard questions they have about how those findings are tied to operational and general ledger systems,” Flynn says. “If you don’t have solid ‘connective tissue’ between your data’s systems of origin and your analytics results, you can’t answer those questions with confidence — which will create counter-productive doubt in the minds of your top stakeholders.”

Stege agrees. “Sufficient, accurate metadata and data lineage are central to optimizing data confidence,” he says. “They are the empirical evidence of how well or how poorly your data governance processes are actually working.”

Scoping the Investment

Of course, no company has unlimited resources to devote to data management. With so many other possible investments — including core systems modernization, mobility, analytics and big data — competing for limited capital budget allocations, insurers have to make sure they rightsize their commitment to such initiatives.

One recommendation most expert practitioners offer is to avoid pursuing perfection for its own sake. “Master data management is a noble endeavor, but it can be a somewhat expensive endeavor to undertake for its own sake,” says KPMG’s Flynn. “But if you can impact the business with a use case that only needs to address 20% of your customer data, then that’s where you should at least initially restrict your focus and your spend.”

At XL Group, for example, a key focus is on broker identities. Brokers often operate under multiple legal names, and those names may not be identical in every one of the company’s systems, especially when the company has acquired multiple insurers over the years.

Pitt and his team have therefore given priority to rationalizing broker identities across systems.

“It is very important for us to understand the total business we are doing with each broker,” he explains. “We can only do that accurately if their identities are consistent and well-defined across all our systems.”
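
One plausible sketch of that kind of rationalization, using only Python’s standard library: fuzzy-match each legal name against a master list and fall back to manual review when nothing scores above a threshold. The broker names and the 0.6 threshold are hypothetical:

```python
from difflib import SequenceMatcher

# Hypothetical master list of broker legal entities.
master_brokers = ["Smith & Jones Insurance Brokers LLC", "Acme Risk Partners Inc."]

# The same brokers as they might appear in an acquired company's systems.
source_names = ["Smith and Jones Ins. Brokers", "ACME RISK PARTNERS", "Unrelated Agency Co."]

def best_match(name: str, candidates: list[str], threshold: float = 0.6):
    """Return the closest master record, or None if nothing is similar enough."""
    scored = [(SequenceMatcher(None, name.lower(), c.lower()).ratio(), c) for c in candidates]
    score, match = max(scored)
    return match if score >= threshold else None

for name in source_names:
    print(f"{name!r:40} -> {best_match(name, master_brokers)}")
```

Anything that falls below the threshold would be routed to a data steward for manual review rather than guessed at, which keeps the consolidated view of each broker relationship trustworthy.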

FCCI’s Ayoub adds that an honest tabulation of all the time and effort that skilled staff sink into chasing down data-related problems can be used to rightsize and cost-justify investments in data management: “You have to take a long, hard look at how many hours people are losing because something isn’t right about your data — and everything they could be doing if they weren’t having to do that.”

Anyone leading a data management initiative has a couple of other challenges to tackle even after building a partnership with the business, assessing the size of the problem, and getting the green light for a rightsized effort. The first challenge is figuring out which technologies to use. Vendors offer a wide range of solutions with a wide range of price tags (see sidebar), and their competing claims can be confusing.

Also, with the massive interest in big data across insurance, financial services and other markets, capital is flooding into this segment of the software business, which is leading to a relentless stream of innovation and pseudo-innovation. “It can be very difficult to commit to a data management solution today when you know something new and possibly better is going to be available tomorrow,” Pitt says.

The other challenge is finding the right people to populate the data management team. Good candidates can be hard to find, because they have to possess the geekiness of a data analyst, the business acumen of a workflow consultant and the observation skills of an anthropologist. “The success of any data management effort ultimately depends on the hard work of people who have the right set of skills and who really enjoy working with data,” Stege says. “Given the importance of data to the future of our business, we are actively engaged in finding and recruiting those kinds of people to come and work at Allstate.”
