This blog is the third in a three-part series meant to help insurers address the typical challenges that come with modernizing IT operations. The first part outlined "4 Common Data Challenges Faced During Modernization."
Mid-market P&C carriers looking to offer customer self-service typically face four key challenges: customer data is fragmented across multiple source systems; data formats are inconsistent across those systems; data quality is poor; and systems are not available 24/7.
There are four solution patterns that are commonly used to meet these challenges: establishing a service-oriented architecture; leveraging a data warehouse; modernizing core systems; and instituting a data management program. The particular solution a carrier pursues will ultimately depend on its individual context.
Here are the practical steps a carrier can take toward instituting a data management program that successfully supports customer self-service. Such a program should have the following five characteristics:
A consolidated data repository: The antidote to data fragmentation is a single repository that consolidates data from all systems that are a primary source of customer data. For the typical carrier, this will include systems for quoting, policy administration, CRM, billing and claims. A consolidated repository results in a replicated copy of data, which is a typical allergy of traditional insurance IT departments. Managing the data replication through defined ETL processes will often preempt the symptoms of such an allergy.
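To make the consolidation idea concrete, here is a minimal sketch of a scheduled ETL job that extracts customer records from a few source systems and loads them into a single repository. All connection targets, table names and columns are hypothetical placeholders, not a reference implementation.

```python
# Minimal ETL sketch: pull customer rows from several source systems and
# load them into one consolidated repository, tagged with their origin.
# All databases, tables and columns here are illustrative only.
import sqlite3  # stand-in for the carrier's actual database drivers

SOURCE_SYSTEMS = {
    "policy_admin": "policy_admin.db",
    "billing": "billing.db",
    "claims": "claims.db",
}

def extract_customers(db_path):
    """Pull the customer rows a source system exposes."""
    with sqlite3.connect(db_path) as conn:
        return conn.execute(
            "SELECT customer_id, name, email FROM customers"
        ).fetchall()

def load_into_repository(repo_path, system, rows):
    """Insert (or refresh) the rows in the consolidated repository."""
    with sqlite3.connect(repo_path) as repo:
        repo.execute(
            """CREATE TABLE IF NOT EXISTS customer
               (source_system TEXT, source_id TEXT, name TEXT, email TEXT,
                PRIMARY KEY (source_system, source_id))"""
        )
        repo.executemany(
            "INSERT OR REPLACE INTO customer VALUES (?, ?, ?, ?)",
            [(system, str(cid), name, email) for cid, name, email in rows],
        )

if __name__ == "__main__":
    for system, path in SOURCE_SYSTEMS.items():
        load_into_repository("consolidated_repo.db", system,
                             extract_customers(path))
```

Running a job like this on a defined schedule is one simple way to keep the replication managed and predictable rather than ad hoc.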
A canonical data model: To address inconsistencies in data formats used within the primary systems, the consolidated data repository must use a canonical data model. All data feeding into the repository must conform to this model. To develop the data model pragmatically, simultaneously using both a top-down and a bottom-up approach will provide the right balance between theory and practice. Industry-standard data models developed by organizations such as the Object Management Group and ACORD will serve as a good starting point for the top-down analysis. The bottom-up analysis can start from existing source system data sets.
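As a simplified illustration of the bottom-up side of that analysis, the sketch below maps two hypothetical source-system record shapes into one canonical customer record. The field names and source formats are invented for the example; a real model would be reconciled against ACORD or OMG reference models and the carrier's own data sets.

```python
# Sketch of a canonical customer record plus per-system mapping functions.
# Field names and source formats are hypothetical.
from dataclasses import dataclass

@dataclass
class CanonicalCustomer:
    customer_id: str
    full_name: str
    email: str
    postal_code: str

def from_policy_admin(row: dict) -> CanonicalCustomer:
    # This source stores first/last name separately and calls the field "zip".
    return CanonicalCustomer(
        customer_id=str(row["cust_no"]),
        full_name=f'{row["first_name"]} {row["last_name"]}',
        email=row["email_addr"].lower(),
        postal_code=row["zip"],
    )

def from_crm(record: dict) -> CanonicalCustomer:
    # This source already stores a display name but nests the address.
    return CanonicalCustomer(
        customer_id=str(record["id"]),
        full_name=record["display_name"],
        email=record["email"].lower(),
        postal_code=record["address"]["postal_code"],
    )
```

Every feed into the repository would pass through a mapping of this kind, so downstream consumers only ever see the canonical shape.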
An “Operational Data Store” mindset – a Jedi mind trick: Modern operational systems often use an ODS to expose their data for downstream usage. The typical motivation for this is to eliminate negative performance impacts of external querying while still allowing external querying of data in an operational, as opposed to analytical, format. Advertising the consolidated data repository built with a canonical data model as an ODS will shift the organizational view of the repository from one of a single-system database to that of an enterprise asset that can be leveraged for additional operational needs. This is the data management program’s equivalent of a Jedi mind trick!
24/7/365 availability: To adequately position the data repository as an enterprise asset, it must be highly available. For traditional insurance IT departments, 24/7/365 availability might be a new paradigm.
Successful implementations will require adoption of patterns for high availability at multiple levels. At the infrastructure level, useful patterns would include clustering for fail-over, mirrored disks, data replication, load balancing, redundancy, etc.
At the SDLC level, techniques such as continuous integration, automated and hot deployments, automated test suites, etc., will prove necessary. At the integration architecture level – for systems needing access to data in the consolidated repository – patterns such as asynchronicity, loose coupling and caching will be needed.
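One small sketch of those integration-level patterns: a read-through cache with a time-to-live and a simple retry with backoff, so consuming systems stay loosely coupled from brief repository hiccups. The function names and parameters are hypothetical, not tied to any particular product.

```python
# Sketch of integration-level patterns for consumers of the repository:
# a read-through cache plus retry-with-backoff. Illustrative only.
import time

_CACHE: dict = {}          # customer_id -> (expires_at, value)
CACHE_TTL_SECONDS = 300

def fetch_customer(customer_id, query_repository, retries=3):
    """Return a customer profile, preferring the local cache and
    retrying the repository a few times before giving up."""
    cached = _CACHE.get(customer_id)
    if cached and cached[0] > time.time():
        return cached[1]

    last_error = None
    for attempt in range(retries):
        try:
            value = query_repository(customer_id)
            _CACHE[customer_id] = (time.time() + CACHE_TTL_SECONDS, value)
            return value
        except ConnectionError as exc:
            last_error = exc
            time.sleep(2 ** attempt)   # back off before the next attempt
    raise last_error
```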
Encryption of sensitive data: Once data from multiple systems is consolidated into a single repository, the impact of a potential security breach is amplified several-fold – and breaches will happen; it's only a matter of time, whether they are internal or external, innocent or malicious. To mitigate some of that risk, it's worthwhile to invest in infrastructure-level encryption (options are available in the storage, database and data access layers) of, at a minimum, sensitive data.
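The storage- and database-level options are vendor specific, but as one illustration of protecting a sensitive field at the data access layer, the sketch below uses the widely available Python "cryptography" package's Fernet recipe. Which fields count as sensitive, and how the key is managed, are placeholders for the carrier's own decisions.

```python
# Illustration of encrypting a sensitive field before it is written to the
# consolidated repository. Key management here is deliberately simplified.
from cryptography.fernet import Fernet

# In practice the key would come from a key management service,
# not be generated inline.
key = Fernet.generate_key()
cipher = Fernet(key)

def protect_ssn(ssn: str) -> bytes:
    """Encrypt a Social Security number before persisting it."""
    return cipher.encrypt(ssn.encode("utf-8"))

def reveal_ssn(token: bytes) -> str:
    """Decrypt a stored value for an authorized caller."""
    return cipher.decrypt(token).decode("utf-8")

stored = protect_ssn("123-45-6789")
assert reveal_ssn(stored) == "123-45-6789"
```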
A successful data management program spans several IT disciplines. To ensure coherency across all of them, oversight from a versatile architect capable of conceiving infrastructure, data and integration architectures will prove invaluable.
Samir Ahmed is an architect with X by 2, a technology company in Farmington Hills, Mich., specializing in software and data architecture and transformation projects for the insurance industry.
This blog was exclusively written for Insurance Networking News. It may not be reposted or reused without permission from Insurance Networking News.