A recent survey of the financial services sector conducted by The Economist Intelligence Unit (EIU), sponsored by SAS, a software provider, found that about 41% of respondents agreed that data quality and availability will play a key role in risk management over the next three years.

It struck me that this should be fairly obvious to all of us in the insurance industry, where evaluating and mitigating risk is our very lifeblood. Yet when I look out at the insurance landscape and see enterprises burdened by redundant information silos and multiple, slightly different versions of the same data, I wonder if insurers are really getting it.

Am I Chicken Little warning the industry that the sky is falling? After all, this industry remains one of the strongest parts of the financial services sector despite the need to clean up its data. Perhaps in a more robust economic environment, my concern could be dismissed as trivial, if not delusional. In the midst of the largest economic downturn since the Great Depression, however, even the smallest problem may be magnified to the point where it threatens individual business units, if not the company altogether.

Insurance CIOs, much to their credit, seem to realize the urgency here and are talking up the need to sanitize the data, which is the insurer’s most valuable possession. At the recent IASA Educational Conference and Business Show, Great American Insurance CIO Piyush Singh told me, “Quality goes down when we don’t analyze the data well,” and dirty data is difficult to analyze under any circumstances.  

Similarly, Stephen J. Byrne, VP, Agency & Field Automation Technologies, for Harleysville Insurance said, “Data quality is the most important aspect of believability [of our information].”  

We are beginning to see, in these and other insurance IT executives, a realization that maintaining a single accurate version of high-quality data, viewable in one place, will become a competitive advantage for insurers. The question then becomes: How do we ensure high data quality?
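One concrete starting point is detecting the "slightly different versions of the same data" described above. The sketch below is purely illustrative, not any insurer's actual method; the record fields and normalization rules are assumptions, and real matching would need fuzzier comparison than this.

```python
# A minimal sketch of one data-quality check: flagging near-duplicate
# policyholder records that differ only in formatting. Field names
# ("name", "zip") are hypothetical.

def normalize(record):
    """Collapse case, whitespace, and punctuation so records that
    differ only cosmetically compare equal."""
    return tuple(
        "".join(ch for ch in str(value).lower() if ch.isalnum())
        for value in record.values()
    )

def find_duplicates(records):
    """Group records by normalized form; any group with more than one
    member is a candidate 'slightly different version' of one record."""
    groups = {}
    for rec in records:
        groups.setdefault(normalize(rec), []).append(rec)
    return [grp for grp in groups.values() if len(grp) > 1]

policies = [
    {"name": "John Q. Smith", "zip": "19438"},
    {"name": "john q smith",  "zip": "19438"},   # same person, dirty entry
    {"name": "Mary R. Jones", "zip": "45202"},
]

print(find_duplicates(policies))  # the two Smith records group together
```

Exact-match-after-normalization like this only catches cosmetic duplicates; production master-data tools add probabilistic matching on top, but the principle of converging on one authoritative version is the same.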

Consolidating risk governance was cited as important by 33% of EIU respondents, and it certainly makes sense to have that responsibility in one place. The respondents also mentioned developing a companywide approach to risk, and improving the technology infrastructure.

All of these suggestions have merit, and they point to a single goal: clean, non-redundant, easily accessible data across the enterprise. Reliable, error-free data is critical to minimizing risk in our increasingly competitive sector. You'd think that would be second nature to the insurance industry, but sometimes there is wisdom in stating the obvious.

Ara C. Trembly (www.aratremblytechnology.com) is the founder of Ara Trembly, The Tech Consultant and a longtime observer of technology in insurance and financial services. He can be reached at ara@aratremblytechnology.com.

The opinions posted in this blog do not necessarily reflect those of Insurance Networking News or SourceMedia.
