Data management has developed a bad reputation. Too many insurance executives see it as little more than edits and testing: a set of steps tacked on at the end of a project, where it can cause delays. And therein lies the problem. Carriers need to change their thinking and build data management into a project from the beginning rather than bolting it on later.

With any project—whether it's building a house or a new policy system—whatever's added after the original plans have been set can delay completion and add costs. If data management concepts are built into a project, they won't add to the project timeline and may actually aid in project development and decrease total time and costs.

Data is not just a few numbers compiled for a report to some outside statistical or rating bureau. Data is the information used to underwrite and rate policies. It is the information used to bill insureds for their premiums. It is the information a claims adjuster uses to determine coverage and pay for losses when a claim occurs. It is the information used to determine the adequacy of rates. It is the information used to evaluate the effectiveness of specific products or lines of business. Finally, the information used in financial reports indicates a company's financial stability. Without quality data, insurance companies could not operate.

The question becomes, "What must we do to ensure that we have the necessary data?" The answer is that we must be aware of data management at every step of a project because each decision affects the quality of the data. Whether the outcome is good or bad, our decisions involve the data and, therefore, data management. Unfortunately, in many projects, team members overlook the potential benefits of good data management and end up making decisions that ultimately delay the project, drive up costs and result in poor-quality data.


The first data management tool to understand and use is metadata, or "data about data." It includes data definitions, field or data element descriptions, valid values, tables, edits, collection techniques and storage criteria. These standards reduce development work. For example, existing standards for date fields dictate collecting the full month, day and year, and then specify storing the date in a YYYYMMDD format. Further examination of existing standards reveals edits that ensure collection of accurate data.
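The date-field standard above can be sketched as a simple data edit. This is a minimal illustration, not any bureau's actual rule; the function name is hypothetical:

```python
from datetime import datetime

def passes_date_edit(value: str) -> bool:
    """Hypothetical edit for a date field stored per the YYYYMMDD standard.

    Accepts the value only if it is an 8-digit string that parses as a
    real calendar date, so month, day and year are all present and valid.
    """
    if len(value) != 8 or not value.isdigit():
        return False
    try:
        datetime.strptime(value, "%Y%m%d")
        return True
    except ValueError:
        return False

print(passes_date_edit("20240229"))  # leap day: True
print(passes_date_edit("20230229"))  # not a real date: False
```

An edit like this, reused from existing metadata standards, catches bad dates at entry time instead of during a costly downstream conversion.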

Failing to use existing standards means developing new metadata, which adds steps, boosts costs and lengthens the time needed to establish the data criteria. Worse, the new metadata may not match the existing standards for those data elements, adding cost later when the data has to be converted to make it compatible with similar data items from other systems.

But the best data management means nothing without an understanding of the needs of the users, especially those in underwriting, claims and management. This understanding goes beyond simply determining what data is needed to underwrite and rate the policy, to adjust claims and to determine the combined ratio. It includes determining what information underwriters will need about losses and what they will need to evaluate the insured's experience when the renewal is processed. Claims people need information to settle claims, such as policy coverages, limits, deductibles, exclusions and thrown-in coverages. And upper management needs information to determine the profitability of the product or line of business.

Statutes require that insurance companies submit data concerning policies and claims. Examining reporting requirements early in the project will provide time to submit questions to the various bureaus that require the reports and get their responses as the project is developing. This avoids delays and costly changes late in the project.


Then there are the issues of testing and the initial quality of the system and the data it produces. Testing is often misunderstood and viewed in a negative light, thought of as something done after the project has been completed, just before implementation. Putting off testing can delay the discovery of a major problem until late in the development cycle, driving up the cost in delays and corrections.

The key to successful testing is to do it at every step of the project and not just at the end. Perform tests as "parallel steps" while the development is occurring, not in "series" after each step.

Use different types of testing at various phases of the project. Before programming begins, have representatives of various areas of the company review the specifications. That can prevent overlooking issues that could delay the project if discovered later. As programming progresses, put individual programs through "program testing."

Then, link the programs and begin "system testing." The goal is to ensure the individual components of the system work together.
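The distinction between program testing and system testing can be sketched with a hypothetical rating-and-billing pair of programs. The function names and the rating formula are purely illustrative, not drawn from any real system:

```python
def base_premium(limit: int) -> float:
    """One program: rate a policy from its coverage limit.

    Illustrative formula only: $2.00 of premium per $1,000 of limit.
    """
    return limit / 1000 * 2.0

def bill_amount(limit: int, fees: float) -> float:
    """A second program: produce the billed amount from the rated premium."""
    return base_premium(limit) + fees

# "Program testing": each program is checked in isolation as it is written.
assert base_premium(100_000) == 200.0

# "System testing": the programs are linked and checked end to end,
# confirming the individual components work together.
assert bill_amount(100_000, fees=25.0) == 225.0
```

Running the program-level assertions while each piece is still being written is what lets testing proceed in parallel with development rather than in series after it.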

As the system begins to take shape, users should test the system. Often, user testing is left until the end, if used at all. Getting users involved early lessens the possibility of a user pointing out a problem just before implementation. It enables users to begin making changes to their procedures. It brings "buy-in" from users because they feel they were involved in the development of the project and did not have it forced on them.

If data management, including testing, has been addressed throughout the project, the initial quality of the system and of the data it produces should meet all needs. Unfortunately, it is not uncommon for data management issues to be ignored or put off, and for the system to be implemented with known problems.

Those problems can affect the quality of the data that is collected and stored. Sometimes the entry systems have not made all the necessary changes to collect the needed data. Other times, the entry edits have not been updated to ensure collection of accurate data. In those situations, the feeling is that it is more important to roll out the new product than it is to collect quality data.

Unfortunately, there is a problem with this attitude. After a new product has been in use for 12 to 18 months, it must be evaluated to determine its success. The availability of accurate data concerning the early experience of the product is necessary for management to decide if changes need to be made. Inaccurate or incomplete information could lead to a great product being dropped. Worse yet, the lack of quality information could hide the fact that a new product has serious problems.

Data management is not about a team of people that reviews products just before they are implemented. It is a set of tools and concepts that should be in use during the entire project. Then, quality will be built in at nominal cost instead of being added at a potentially large cost and with delays in implementation.

Mike Freel is a bureau statistics manager at Des Moines, Iowa-based EMC Insurance Group Inc.
