Avoiding the Bermuda Triangle of Data

Data is not a new challenge for insurers. In this digital era, however, the sheer volume, velocity and variety of data available to insurers are staggering. It's becoming increasingly important for insurance business leaders to understand how to find, analyze and use this wealth of data to its utmost advantage – whether that is crafting business strategy, enhancing the product portfolio or informing customer engagement.

Now, while it's one thing to espouse the benefits of data, it's quite another to put these practices into place. In fact, the problem we often see among our clients is that questions around data ownership, data quality and data security can sidetrack a conversation and alienate business stakeholders. This is the "Bermuda Triangle" of big data: if one isn't careful, it can pull the conversation so far off course that its business imperative and objective are lost.

How can you avoid the "Bermuda Triangle"? The trick is to keep it simple. Some issues will never be black and white, and insurance leaders must be willing to accept that ambiguity or risk becoming obsolete. With new types of data to handle, compliance requirements to keep pace with, and new business models attempting to apply data analytics in innovative ways, there are no precedents or clearly defined approaches in this area. It's just too new. That said, we have seen a few processes that are critically important for avoiding the Bermuda Triangle.

Who Owns the Data?

Data ownership has always been a tricky question, and it has only grown more complex. Questions of data ownership now arise not only within the insurance company but also between companies: between carriers and brokers, or between carriers and reinsurers. And as information crosses geographic and business lines, it becomes harder to pinpoint its source. This is especially true of data gathered from external sources, such as social media channels.

The solution lies in hiring data scientists who can work within a federated data stewardship model. These data scientists should have a background in business, technology, mathematics and/or statistics, and be able to use that knowledge to determine how the data can be used to increase business value. The model instills accountability at the enterprise level while also placing deep-rooted responsibility within specific lines of business or geographies. It's this combination of business intelligence and technical savvy that allows these data scientists to eventually take ownership of the data.
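
To make the model concrete, here is a minimal sketch of what a federated stewardship registry might look like. Everything in it (the DataDomain structure, the example domains and stewards) is a hypothetical illustration of the principle, not a prescribed implementation: each data domain carries both an enterprise-level accountable owner and a steward within a specific line of business or geography.

```python
from dataclasses import dataclass

# Hypothetical sketch of a federated data stewardship registry:
# accountability sits at the enterprise level, while day-to-day
# responsibility is held within a line of business or geography.

@dataclass(frozen=True)
class DataDomain:
    name: str                 # e.g. "claims", "policyholder"
    enterprise_owner: str     # enterprise-level accountable party
    lob_steward: str          # steward within a line of business
    geography: str            # region whose rules govern this data

REGISTRY = [
    DataDomain("claims", "Chief Data Officer", "Claims Analytics Lead", "EU"),
    DataDomain("policyholder", "Chief Data Officer", "Personal Lines Lead", "US"),
]

def steward_for(domain_name: str) -> str:
    """Answer 'who owns this data?' for a given domain."""
    for domain in REGISTRY:
        if domain.name == domain_name:
            return domain.lob_steward
    raise KeyError(f"No steward registered for '{domain_name}'")

print(steward_for("claims"))  # -> Claims Analytics Lead
```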

Validating Your Information

Once you have the data scientists in place, the question of data quality must be addressed. With the amount of data produced and the number of sources growing every day, it is becoming increasingly difficult for companies to maintain the consistency and quality of their data. Couple that volume with a growing mix of unstructured data (such as data from social media channels) and structured data, and the problem is only exacerbated.

In today's world, it is inevitable that data quality will be a concern. To improve it, start by containing the problem at a manageable level and making incremental, scalable improvements. First, define and prioritize the essential data parameters; then focus cleansing efforts on the data quality issues within those parameters. Gradually, proactive management of data quality through the right guidelines and stewardship will raise the quality of the data in use. These organizational changes should be coupled with technology tools designed to support data quality, including data profiling and feedback loops into front-end transactional systems.
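
As a rough illustration of this "contain, then improve incrementally" approach, the sketch below profiles completeness for a prioritized set of fields only, rather than for the whole dataset. The field names and the 90% threshold are hypothetical choices for the example; in practice, the flagged results would feed back into front-end transactional systems.

```python
# Hypothetical sketch: profile only the prioritized, essential fields
# first, so cleansing effort stays contained and scalable.

records = [
    {"policy_id": "P-1001", "premium": 1200.0, "zip_code": "10001"},
    {"policy_id": "P-1002", "premium": None,   "zip_code": ""},
    {"policy_id": None,     "premium": 950.0,  "zip_code": "94105"},
]

# Step 1: define and prioritize the essential data parameters.
ESSENTIAL_FIELDS = ["policy_id", "premium"]  # zip_code deferred to a later pass

# Step 2: profile completeness within those parameters only.
def completeness(records, field):
    filled = sum(1 for r in records if r.get(field) not in (None, ""))
    return filled / len(records)

# Step 3: flag fields below threshold for cleansing and feedback.
for field in ESSENTIAL_FIELDS:
    score = completeness(records, field)
    status = "OK" if score >= 0.9 else "NEEDS CLEANSING"
    print(f"{field}: {score:.0%} complete -> {status}")
```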

Keep Your Data Secure

Along with data quality concerns, the mix of structured and unstructured data within the business raises security and privacy concerns. If data is shared freely, it can be accessed through multiple, unregulated channels; it can cross boundaries, geographical or otherwise; and its source is often unknown. In an industry as highly regulated as insurance, strict data regulations will always keep the focus on data security and privacy.

The federated data stewardship model mentioned above can also be used here to ensure that data security and privacy policies are contextualized for an insurance company's specific region. Current data masking and encryption methods may not be enough as companies find new ways to engage with customers or use technology to extend their existing systems and processes.
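
To make the idea concrete, here is a minimal sketch of the kind of field-level masking and one-way pseudonymization the paragraph alludes to. The field names, formats and salt are hypothetical, and a real deployment would layer proper encryption and key management on top; this only shows the basic mechanics.

```python
import hashlib

# Hypothetical sketch of field-level data protection:
# mask what must stay human-readable, hash what only needs matching.

def mask_policy_number(policy_number: str) -> str:
    """Keep the last four characters visible; mask the rest."""
    return "*" * (len(policy_number) - 4) + policy_number[-4:]

def pseudonymize(value: str, salt: str = "per-deployment-secret") -> str:
    """One-way hash so records can be joined without exposing identity."""
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()[:16]

record = {"name": "Jane Doe", "policy_number": "PN-2024-55821"}
safe_record = {
    "name": pseudonymize(record["name"]),
    "policy_number": mask_policy_number(record["policy_number"]),
}
print(safe_record)  # e.g. {'name': '<hash>', 'policy_number': '*********5821'}
```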

In looking at how we manage data ownership, data quality and data security, it's clear that insurers who want to use data to its fullest still have a lot of progress to make. But this is a clear growth area for the industry, and if we learn how to navigate around the Bermuda Triangle, rather than falling into it, we will be well on our way to realizing the full business benefits of big data.

 
