Avoiding common insurance data pitfalls


Insurance carriers today know that data is at the core of their business – in fact, they’re investing millions of dollars into analytics, artificial intelligence and machine learning solutions to help convert that data into insights that support smart business decisions. But if any of the data sets they’re using include faulty information, the results will be wrong no matter how good the analytics capabilities may be – and the consequences can be costly.

This underscores the importance of identifying and integrating only the highest quality data – data that has been cleansed, normalized and standardized so it can be easily integrated into an insurer’s existing workflow. When evaluating a data set, a series of factors and corresponding questions can help assess the quality of the data and whether it’s worth incorporating into your business.

  • First, is the source of the data reliable, up-to-date and accurate?
  • Second, is the data complete or are there gaps or missing values, and, if gaps do exist, can that missing data easily be corrected or supplemented?
  • Third, does the desired use of the data comply with applicable federal and state laws?
  • Fourth, who owns the data?
  • Fifth, since bias can impact the quality of data, how were the outcomes generated and were they screened?
  • And finally, are there ways to enrich or benchmark the data for greater insights?

Reviewing the data with these questions and factors in mind can help to determine the quality and value of the data and where it may need to be supplemented.
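The second question – completeness – lends itself to a simple automated check at intake. Below is a minimal sketch of that idea; the field names, sample records and `completeness` function are illustrative assumptions, not part of any specific carrier's workflow.

```python
# Hypothetical required fields for an intake completeness check.
REQUIRED_FIELDS = ["policy_id", "premium", "state"]

def completeness(records, required=REQUIRED_FIELDS):
    """Fraction of records with every required field present and non-empty."""
    if not records:
        return 0.0
    complete = sum(
        1 for r in records
        if all(r.get(f) not in (None, "") for f in required)
    )
    return complete / len(records)

# Illustrative sample: two of the three records have gaps.
sample = [
    {"policy_id": "P1", "premium": 1200, "state": "TX"},
    {"policy_id": "P2", "premium": None, "state": "TX"},  # gap: missing premium
    {"policy_id": "P3", "premium": 950,  "state": ""},    # gap: empty state
]
print(f"completeness: {completeness(sample):.2f}")  # 1 of 3 records is complete
```

A score like this, computed before a data set enters the workflow, makes the "are there gaps?" question measurable rather than anecdotal, and flags whether missing values need to be corrected or supplemented first.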

To help ensure that high quality data is being used, I recommend that carriers implement a thoughtful strategy and robust quality assurance process: the quality of the data affects not only a carrier’s bottom line but also its customers’ experience. To develop this data quality strategy, there are five best practices insurance carriers should follow to avoid common data quality pitfalls:

Identify data experts and trusted sources: Recognizing and sourcing data from the leading data experts in the industry helps ensure the data is of high quality. Furthermore, developing relationships with these experts positions a business to consistently receive trusted data, even as data evolves and new data sets emerge – setting a carrier up for long-term success.

Take a team approach: Each part of your business – business analytics, compliance, customer service, etc. – uses data in different ways. Take this into consideration when new data is brought into the business, to make sure that data is valuable for the entire company. To ensure this team approach, leaders from across all areas of the business should collaborate to make certain that all data quality needs are met.

Implement a review process and regular data monitoring: The review process should evaluate the data set based on the factors outlined above, and in particular check for data bias, one of the factors most often overlooked. Reviewing the data at the outset can help to mitigate issues before data models are developed and results are produced. Having a process in place with defined metrics, regular checkpoints and a triage process, along with consistent monitoring, can help to further ensure that data quality is preserved.
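The "defined metrics, regular checkpoints and a triage process" idea can be sketched as a threshold check that runs at each checkpoint and routes any breaches for review. The metric names and threshold values below are illustrative assumptions.

```python
# Hypothetical quality thresholds agreed on during the review process.
# completeness should stay high; duplicate_rate should stay low.
THRESHOLDS = {"completeness": 0.95, "duplicate_rate": 0.02}

def triage(metrics, thresholds=THRESHOLDS):
    """Return the metrics that breach their thresholds at this checkpoint."""
    issues = []
    for name, value in metrics.items():
        limit = thresholds.get(name)
        if limit is None:
            continue  # no threshold defined for this metric
        breached = value < limit if name == "completeness" else value > limit
        if breached:
            issues.append(name)
    return issues

# One checkpoint's measurements: completeness has drifted below target.
checkpoint = {"completeness": 0.91, "duplicate_rate": 0.01}
print(triage(checkpoint))  # ['completeness'] -> flag for review
```

Running a check like this on a schedule turns "consistent monitoring" into a concrete loop: measure, compare against agreed thresholds, and triage anything that drifts.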

Assign a data owner: Establish a team or group responsible for understanding, normalizing and analyzing the data to determine the best uses of the data and guarantee that it’s being used to its fullest capacity. The data owner can also make sure that a plan is in place to track data quality to ensure it doesn’t drift or decrease over time.

Improve data literacy: Building out a data science team – and training every member to review the key data factors and, in particular, to identify inaccuracies, errors or data bias – is the essential foundation of a strong data team. Improving data literacy across the team also empowers employees to make their own analyses and recommendations about when new analytics or solutions may be needed to enhance insights across different areas of the business.

Whether insurers are using data to price new policies, evaluate claims or renew existing coverage, it’s equally important that the information being used is accurate. High quality data not only protects a carrier’s bottom line, it can also help to accelerate decision making, build more accurate pricing, enhance customer interactions, achieve positive results across the policy lifecycle and create a competitive advantage.

So why take the risk? Insurance carriers, fortunately, have an increasing amount of data from a wide variety of sources at their fingertips – but it all comes down to what a carrier decides to use and what it does with it. I encourage insurers to take the time, now, to set up these best practices and make sure they’re proactively avoiding data pitfalls. Doing so helps ensure that the data they’re using is correctly informing business strategies, risk assessment and pricing – and that, ultimately, helps to drive profitability.
