Three steps for managing global insurance data

As if insurers weren't already swimming upstream before the challenges of big data erupted onto the scene, the insurtech movement has only compounded the problem. Technology has made the world dramatically smaller, with fewer physical barriers to working across borders. Data has gone global, and insurers must now pay attention to more than just local regulation and legislation.

Even insurers doing business only in the U.S. right now need to know what other countries are doing. With the adoption of the General Data Protection Regulation (GDPR) in the European Union (EU), the U.S. is starting to feel pressure to clamp down on how companies use personally identifiable information (PII). On the consumer side, people are demanding greater transparency into how their data is used and shared, and more control over both.

Undeniably, digital modernization and emerging insurtech innovations are reshaping the data landscape of the global insurance industry. That presents opportunities for insurers to take more control by reimagining their data technology stacks, breaking apart existing data silos, and cultivating relationships with new third-party data providers. Such steps will allow insurers to use new and emerging sources of data to streamline and improve operations and processes, reduce costs, and boost efficiency.

Reimagining the Data Technology Stack
Operating in an environment of legacy applications complicated by merger and acquisition (M&A) activity, insurance CIOs must focus on streamlining data management and modernizing internal IT infrastructure to take advantage of emerging, alternative data sources. According to PwC, there were approximately $4.4 billion in announced M&A deals during the third quarter of 2019 alone, and CIOs are being challenged to consolidate data across acquired businesses amid multiple data formats, disparate data management platforms, and overwhelming compliance requirements.
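
To make the consolidation challenge concrete, here is a minimal Python sketch of normalizing policy records from two acquired systems into one canonical shape. The source field names (POL_NO, premiumUsd, and so on) are hypothetical, not drawn from any real system.

```python
# Minimal sketch of post-M&A data consolidation: normalizing policy records
# from two acquired systems into one canonical shape. All source field names
# here are hypothetical, for illustration only.
from dataclasses import dataclass
from datetime import date


@dataclass
class Policy:
    """Canonical policy record used across the merged organization."""
    policy_id: str
    premium: float   # annual premium in USD
    effective: date


def from_legacy_mainframe(row: dict) -> Policy:
    # Acquired system A exports fixed-field dicts with cents-based premiums.
    return Policy(
        policy_id=row["POL_NO"].strip(),
        premium=int(row["PREM_CENTS"]) / 100,
        effective=date.fromisoformat(row["EFF_DT"]),
    )


def from_modern_api(obj: dict) -> Policy:
    # Acquired system B returns nested JSON with dollar-based premiums.
    return Policy(
        policy_id=obj["policy"]["id"],
        premium=float(obj["policy"]["premiumUsd"]),
        effective=date.fromisoformat(obj["policy"]["effectiveDate"]),
    )


# One canonical stream, regardless of which acquisition the data came from.
records = [
    from_legacy_mainframe({"POL_NO": " A-1001 ", "PREM_CENTS": "125000",
                           "EFF_DT": "2020-01-01"}),
    from_modern_api({"policy": {"id": "B-2002", "premiumUsd": "980.50",
                                "effectiveDate": "2020-03-15"}}),
]
```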

Traditionally, insurers have relied on structured data stores and warehouses, which are costly to run and maintain and limited in their ability to manage heterogeneous data. Insurance companies must instead invest in a next-generation data engine to enable data-driven decision making, which, according to some estimates, can improve an insurer's combined ratio. The data engine is the technology stack that not only captures data to generate actionable insights but also integrates those insights with critical business processes in real time. Such new-age data engines promote innovation by making it fast to develop and scale up data pilot projects. Along with a robust data engine, insurers need an efficient data model, a reusable data ingestion framework, and an executive dashboard to accelerate the data-to-insights-to-action lifecycle.
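
What "reusable ingestion framework" means in practice is shared plumbing with pluggable pieces. The sketch below, with invented function and field names, shows the idea: any source and any sink can be swapped in while validation stays the same.

```python
# Illustrative sketch of a reusable ingestion framework: pluggable sources
# feed a shared validate-and-load pipeline, so each new data pilot reuses
# the same plumbing. Names and fields are assumptions for illustration.
from typing import Callable, Iterable, Iterator


def validate(rows: Iterable[dict], required: tuple[str, ...]) -> Iterator[dict]:
    """Pass through rows carrying all required fields; a production engine
    would quarantine rejects for review rather than silently dropping them."""
    for row in rows:
        if all(row.get(field) for field in required):
            yield row


def run_pipeline(source: Iterable[dict],
                 sink: Callable[[dict], None],
                 required: tuple[str, ...]) -> int:
    """Pull from any source, validate, push to any sink; return rows loaded."""
    loaded = 0
    for row in validate(source, required):
        sink(row)
        loaded += 1
    return loaded


# The same plumbing serves a claims feed today and a telematics pilot
# tomorrow -- only the source iterable and the sink callable change.
sample = [{"claim_id": "C-1", "amount": "2500"},
          {"claim_id": "", "amount": "10"}]   # second row fails validation
print(run_pipeline(sample, print, required=("claim_id", "amount")))  # -> 1
```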

Breaking Apart Data Silos
Many insurers today lack a single source of truth because they have no shared data repository or even a well-defined taxonomy that would enable diverse types of data to be combined accurately. Data silos built up over the years and reinforced by the M&A activity mentioned above prevent many insurance organizations from seamlessly integrating data from ecosystem partners. Moreover, with the proliferation of cloud computing in the insurance world, data now resides in hybrid environments – a combination of on-premises and cloud repositories.

A multi-model data hub offers one way to combine all that data – existing and new – into a single data universe. The data hub integrates data from different sources, standardizes it, and provides real-time data access for analytics, compliance, and management reporting. What makes multi-model data hubs unique is their ability to support multiple data models in their native form on a single, integrated backend, which also enhances data availability, consistency, and security. Another way of achieving a single source of truth, albeit a more cumbersome one, is to use an open-platform data repository to store validated and normalized data from multiple, disparate sources.
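
To show what "multiple data models on a single backend" looks like, here is a deliberately toy Python illustration. Real multi-model hubs do this at scale with proper indexing and security; everything below is a simplified assumption.

```python
# Toy illustration of the multi-model idea: one backing store accepts both
# document-style and row-style records in their native shapes, with a single
# query surface on top. A real multi-model hub adds indexing, transactions,
# and access control; this sketch only conveys the concept.
import json


class MultiModelHub:
    def __init__(self):
        self._store: list[dict] = []   # the single, integrated backend

    def put_document(self, doc: dict) -> None:
        """Store a nested JSON-style document as-is (its native form)."""
        self._store.append({"model": "document", "body": doc})

    def put_row(self, table: str, row: dict) -> None:
        """Store a flat relational-style row as-is (its native form)."""
        self._store.append({"model": "row", "table": table, "body": row})

    def query(self, predicate) -> list[dict]:
        """One query interface spanning both data models."""
        return [r["body"] for r in self._store if predicate(r["body"])]


hub = MultiModelHub()
hub.put_document({"policy": {"id": "P-1", "insured": {"state": "TX"}}})
hub.put_row("policies", {"id": "P-2", "state": "TX"})
# A single query spans records regardless of their original model.
texas = hub.query(lambda body: "TX" in json.dumps(body))
```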

In all this, global data governance plays a critical role in building trust in data through stringent quality assurance and end-to-end traceability from source to consumption. With data privacy regulations such as GDPR now in force, governance workflows ensure that PII stored in a centralized repository is managed effectively, preventing compliance lapses. Watertight governance also delivers a markedly better experience for business users through improved data accessibility and usability.
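
A minimal sketch of such a governance workflow follows: PII fields are tagged in a catalog, masked for analytics consumers, and every access is logged for traceability. The field classifications and the hash-based masking rule are illustrative assumptions, not a prescribed standard.

```python
# Hedged sketch of a governance workflow: a catalog classifies fields, PII
# is pseudonymized before analytics use, and accesses are logged for
# source-to-consumption traceability. All classifications are illustrative.
import hashlib
from datetime import datetime, timezone

CATALOG = {  # field -> classification, maintained by data governance
    "name": "PII",
    "email": "PII",
    "claim_amount": "non-PII",
}

AUDIT_LOG: list[dict] = []


def mask(value: str) -> str:
    """Pseudonymize a PII value with a one-way hash (data minimization)."""
    return hashlib.sha256(value.encode()).hexdigest()[:12]


def read_for_analytics(record: dict, user: str) -> dict:
    """Return a masked view of the record and log the access for lineage."""
    AUDIT_LOG.append({"user": user, "fields": list(record),
                      "at": datetime.now(timezone.utc).isoformat()})
    return {k: mask(v) if CATALOG.get(k) == "PII" else v
            for k, v in record.items()}


safe = read_for_analytics({"name": "Jane Doe", "email": "jane@example.com",
                           "claim_amount": 2500}, user="analyst-7")
```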

Embracing New Third-Party Data Providers
Unlike established third-party data providers such as Verisk and LexisNexis, many of today's insurtech startups are finding new, innovative ways of slicing, dicing, and delivering information relevant to assessing and mitigating risk. These challengers offer access to emerging, open-source or publicly available datasets at far more affordable prices than those typical of the industry's legacy data providers. By integrating third-party data into customer-facing business workflows, insurers can pre-fill form fields and ensure data completeness, enhancing both the policyholder experience and risk management.
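
As a concrete example of that pre-fill pattern, here is a short Python sketch that maps a third-party property lookup onto quote-form fields. The provider, endpoint, and response fields are entirely hypothetical; a real integration would use a vendor's documented API.

```python
# Sketch of pre-filling a quote form from a third-party property-data API.
# The provider URL and the response fields below are hypothetical.
import requests


def prefill_quote_form(address: str) -> dict:
    """Fetch property attributes and map them onto quote-form fields."""
    resp = requests.get(
        "https://api.example-property-data.com/v1/properties",  # hypothetical
        params={"address": address},
        timeout=5,
    )
    resp.raise_for_status()
    prop = resp.json()
    # Map provider fields to the carrier's form; defaults keep the form
    # usable even when the provider has gaps (data-completeness check).
    return {
        "year_built": prop.get("yearBuilt", ""),
        "square_feet": prop.get("livingArea", ""),
        "roof_type": prop.get("roof", {}).get("material", ""),
    }
```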

There have also been significant developments in open-source technology, with industry heavyweights driving cross-industry collaboration and consensus around standardized data formats. For instance, the Catastrophe Exposure Data Exchange (CEDE) database schema from AIR Worldwide aims to facilitate accurate and transparent data exchange across the insurance value chain.
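
The value of a standardized schema is that every participant writes one mapping from its internal model. The sketch below shows that mapping step with invented target field names; the actual CEDE schema defines its own tables and columns, so a real exporter would follow the published specification.

```python
# Illustrative mapping of an internal exposure record to a standardized
# exchange format. Target field names are invented for this sketch and are
# NOT the actual CEDE schema; consult the published spec for real exports.
def to_exchange_format(internal: dict) -> dict:
    """Translate one internal location record into standardized fields."""
    return {
        "LocationID": internal["loc_id"],
        "Latitude": round(float(internal["lat"]), 6),
        "Longitude": round(float(internal["lon"]), 6),
        "ReplacementValue": internal["tiv_building"],  # total insured value
        "ConstructionCode": internal.get("construction", "UNKNOWN"),
    }


record = to_exchange_format({"loc_id": "L-9", "lat": "29.7604",
                             "lon": "-95.3698", "tiv_building": 450000})
```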

As the industry begins to embed intelligent technologies such as artificial intelligence and machine learning at scale across business processes, robust data management will become even more crucial. The success of those initiatives will depend on the availability, quality, and integrity of data, which is closely tied to technology-stack maturity, data management practices and governance, and in-house technical talent. The good news is that insurers and managing general agencies (MGAs) can now leapfrog competitors more easily by leveraging the data science expertise of industry-focused partners. Such strategic partnerships deliver faster time to production, lower implementation cost and risk, and streamlined change management. Insurance carriers that invest in collaborative relationships with digital-first, data-centric insurtech partners will be able to unearth the data gold mine, achieving operational excellence, sustainable business growth, and policyholder confidence.
