The Industry's Dirty Secret

"Garbage in, garbage everywhere." That's a twist on the old adage, "garbage in, garbage out," courtesy of Firstlogic Corp., a La Crosse, Wis.-based data quality software provider. "We say, 'garbage in, garbage everywhere' because so many systems share data that bad data in one spot can easily propagate across the entire organization," says Chris Colbert, industry marketing director, at Firstlogic.Bad data can also spread across organizations, as David Jokinen discovered when J.P. Morgan Chase & Co. identified him as deceased in its systems-instead of his mother, who passed away in April 2001.

For more than two years, as reported recently in The Wall Street Journal, Jokinen has been trying to correct J.P. Morgan's error and convince mortgage brokers, credit card issuers, car dealers and insurers that he is very much alive and deserving of their products and services.

Indeed, the financial services industry is converging. And as it does, insurers, banks, brokerages and their business partners are exchanging information electronically in real time to process transactions more quickly, to improve customer service and to reduce costs associated with manual workflow.

"We share data with agents, we share it with people who do claims processing, with banks that offer mortgages, and with risk managers," says Mele Fuller, interface architect, at Seattle-based Safeco Corp. "There are many organizations with whom we share our data-and it's growing." (See "The Industry Standard for Consistent Data," page 24.)

The industry also is implementing customer relationship management (CRM), data warehousing and business intelligence solutions. The intention is to share data enterprisewide, and analyze it to make more informed decisions more quickly in response to market changes and competitive pressures.

Therein lies the conundrum: The financial services industry, which has always been built on data, is becoming even more dependent on it. And as it comes to depend on sharing data, often in real time, managing that data effectively becomes not just important but absolutely necessary.

"Detroit manufactures cars. You can go to the dealership. You can touch them. You can smell them. You can drive them. You can see the deliverable," says George Jablonski, P&C enterprise data architect at The Hartford, Hartford, Conn.

"Insurance, on the other hand, sells promises," he says. "The promises are documented in contracts. And contracts turn into data. And data is stored. Our asset is that data versus a car that you can see. If you understand this analogy, you understand the importance of information as an asset to an insurance company."

A big problem

Yet, despite the fact that data management is the linchpin of insurers' operations, a lot of dirty data lurks in their systems. Jokinen's tale is just one of many stories of data errors wreaking havoc in the financial services world; most go untold.

"The problem is bigger than anyone fully realizes, or is willing to acknowledge," says Ron Barker, insurance practice area leader at Chicago-based Knightsbridge Solutions LLC, a data management consulting firm.

In fact, The Data Warehousing Institute, Seattle, estimates that poor-quality customer data costs U.S. businesses a staggering $611 billion per year. This figure doesn't even include the cost of losing customer loyalty by incorrectly addressing letters or failing to recognize a customer who calls or visits a company's Web site (see "The Cost of Dirty Data").

"People are getting fed up with getting mail with their names scrambled," says Jack Hermansen, CEO of Language Analysis Systems Inc., a Herndon, Va.-based multicultural name recognition software provider. "I received a letter that read, 'Dear Mr. Inc.' I'm sure everybody has stories like that. A lot of people just throw the mail in the garbage and say, 'If this is how much this company cares about treating names, what luck will I have calling them and being treated like an individual?'"

Customers are getting fed up, and the government is putting pressure on companies to manage their data better. The Gramm-Leach-Bliley Act (GLBA) and the Health Insurance Portability and Accountability Act (HIPAA) both require insurers to protect the privacy of customer data. Similarly, the recently passed Sarbanes-Oxley Act mandates that public companies report accurate financial data, with hefty fines and imprisonment as penalties.

Data quality is a hot topic again, says Tracy Spadola, senior industry consultant at Teradata, a division of NCR Corp., Dayton, Ohio. "I've been working in the field for 20 years. It had its heyday in the 1980s, and it dipped. But it's coming around." With so much more information being captured, shared and scrutinized, companies are asking, "How do we manage it?" she says.

"We're hearing more and more about data quality and data management because it's like a pressure cooker," says William Sinn, vice president of insurance and healthcare marketing at Teradata. "People realize they can't embark on a lot of business initiatives unless they've got good data quality." (See "Cleaning Your Data-And Keeping It Clean," page 38.)

Indeed, insurers are investing in initiatives such as business intelligence, data mining and analytical tools to help them correlate policy, claims, demographic, geographic, and other customer and operational data so they can respond more quickly to market pressures.

360-degree view

Allstate Insurance Co., for example, is developing an enterprise CRM program that involves infrastructure modifications, an enterprise customer database, analytics, business rules software and change management.

The objective is to create a 360-degree profile of Allstate's customers, and their households, to assist the Northbrook, Ill.-based company in cross-selling to and retaining those policyholders across distribution channels, according to Kimberly Harris, research director at Gartner Inc., Stamford, Conn.

U.S. Risk Insurance Group, a Dallas-based managing general agency (MGA) that distributes excess and surplus lines, also is investing in analytical technology to understand and run its business better. "There's an increasing need to articulate our business plans and to understand our book of business better," says Monte Stringer, executive vice president and CIO of U.S. Risk Insurance Group.

"For an MGA to be successful in this current hard market, that MGA has to have an almost fanatical focus on underwriting," he says.

To that end, U.S. Risk is implementing business intelligence technology from Thazar Inc., a Skywire Software company located in Frisco, Texas. Thazar's software will enable U.S. Risk to determine what business it is producing, where that business is coming from geographically, and from which producers. "The more we know about our business, the better we can perform in the marketplace," Stringer says.

The insurance industry currently is focusing on underwriting results more than it did during the recent soft market cycle, but the infrastructure at most companies does not support the granularity and level of analysis needed to truly understand the relationship between risk and costs, according to Tom Chesbrough, executive vice president and founder of Thazar.

Insurers need detailed data about demographics, driving records, vehicles, geography and premium and loss characteristics, he says.

They also need clean data. Data mapping and cleansing is by far the most challenging part of any data mastery project, according to Matthew Josefowicz, senior analyst at Celent Communications Inc., a Boston-based research and advisory firm. This process typically consumes 80% of the implementation time and resources, and 40% of the overall project from planning to training and maintenance, he notes in a recent Celent report titled "Insurance Data Mastery Strategies."

A significant portion of U.S. Risk's business intelligence implementation involves testing data quality, Stringer explains. Initially, U.S. Risk is creating manual reports and calculating certain known variables. Then, the team is plugging the same data into the business intelligence system to ensure the data is clean and the results are accurate. "Bad data is worse than no data," he says.
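
As a rough illustration of that kind of reconciliation test, the sketch below computes totals by hand from a few source records and asserts that the figures a BI system reports match them. Every value, field name and tolerance here is hypothetical; this is not U.S. Risk's actual process.

```python
# Sketch of a reconciliation test: compute "known variables" by hand
# from the source records, then compare them against what the BI
# system reports for the same data. All values here are hypothetical.

source_records = [
    {"policy": "P-1001", "state": "TX", "written_premium": 12500.00},
    {"policy": "P-1002", "state": "TX", "written_premium": 8300.00},
    {"policy": "P-1003", "state": "OK", "written_premium": 4100.00},
]

# Manually calculated figures, analogous to the manual reports.
manual_total = sum(r["written_premium"] for r in source_records)
manual_count = len(source_records)

# Figures reported by the BI system after loading the same records
# (hard-coded here; in practice they would come from a report or query).
bi_total, bi_count = 24900.00, 3

TOLERANCE = 0.01  # allow for rounding differences

assert bi_count == manual_count, f"count mismatch: {bi_count} vs {manual_count}"
assert abs(bi_total - manual_total) <= TOLERANCE, (
    f"premium mismatch: BI={bi_total}, manual={manual_total}"
)
print("Reconciliation passed: BI output matches the manual calculation.")
```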

In fact, many insurers became aware of just how dirty their data is when they implemented CRM systems and data warehouses back in the 1990s, says Teradata's Spadola.

"Once insurers began pulling all their data together, chances are, it was the first time they were seeing it all linked," she says. "Instead of their marketing data here and their underwriting data there, it was all pulled together-and that's when many companies realized they had some quality issues."

It's also why many executives are now reluctant to invest in data management solutions, according to Knightsbridge's Barker. "A data warehouse alone can cost millions of dollars," he says. "And there are enough data warehouse train wrecks and CRM train wrecks out there that CIOs are reluctant to pony up the money to support these efforts now."

With credibility risks, compliance requirements, and competitive pressures mounting, however, insurance executives realize they can't ignore data quality and data management much longer.

"This is a strategic issue," says Teradata's Spadola. "It's all well and good to say, 'We know we have data problems, and we need to fix them.' But it really requires setting up a formal data stewardship role and putting policies and procedures in place that say, 'We are going to treat our data as a resource, and we're going to manage it effectively.'"

That's precisely what's happening at The Hartford, according to Jablonski. This year, the carrier established an enterprise data unit. And "information/data" is a category unto itself in the company's information technology investment portfolio.

"Establishing this unit signifies that the business folks recognize the importance of data, and that it's a good idea for the management of data to be centralized," Jablonski says. "It will help us in the future to make sure we treat data consistently across the organization."

Such initiatives have come and gone in the past, he says, but this time it's different. "This is a very strong effort. The recognition is there that we want to treat information as an asset-and folks here are doing something about it."

Still, it's not uncommon for companies to view improving data quality as a one-time project. When they bring in a new system, they see that as an opportunity to clean up their data, Firstlogic's Colbert says. "But data quality is an all-the-time thing. Data degrades over time. People move. People get married. Obviously, in the insurance business, people die. These changes have to be dealt with on a consistent basis."
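
One simple way to make data quality "an all-the-time thing" is a recurring check that flags records overdue for re-verification. The sketch below illustrates the idea; the field names and the 18-month threshold are assumptions chosen for illustration, not a rule from any of the companies quoted here.

```python
# Sketch of an ongoing data-quality check: periodically flag customer
# records that haven't been verified recently, since names, addresses
# and life status all drift over time. The field names and the
# 18-month threshold are illustrative assumptions.
from datetime import date, timedelta

REVERIFY_AFTER = timedelta(days=548)  # roughly 18 months

customers = [
    {"id": 1, "name": "A. Smith", "last_verified": date.today() - timedelta(days=700)},
    {"id": 2, "name": "B. Jones", "last_verified": date.today() - timedelta(days=90)},
]

stale = [c for c in customers
         if date.today() - c["last_verified"] > REVERIFY_AFTER]

for c in stale:
    print(f"Customer {c['id']} ({c['name']}) is due for re-verification.")
```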

One tool Knightsbridge's Barker promotes is a metadata repository. "Metadata is data about data," he says. It describes: What is the data? Where did it come from? What transformations did it go through? What happened to it from the time it was pulled from the source system into the data warehouse? How did it change? "Metadata becomes the key element associated with data quality," he says.
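
In code, a metadata-repository entry can be as small as a record that answers those questions alongside the data it describes. The following is a minimal, hypothetical sketch, not any vendor's actual schema:

```python
# Sketch of a minimal metadata-repository entry answering Barker's
# questions: what the data is, where it came from, and what happened
# to it on the way into the warehouse. The structure and field names
# are illustrative, not any particular vendor's schema.
from dataclasses import dataclass, field

@dataclass
class MetadataRecord:
    element: str                     # what is the data?
    source_system: str               # where did it come from?
    transformations: list[str] = field(default_factory=list)  # how did it change?

annual_premium = MetadataRecord(
    element="annual_premium",
    source_system="policy_admin.PREMIUM_TBL",
)
# Log each transformation applied between the source system and the warehouse.
annual_premium.transformations.append("converted cents to dollars")
annual_premium.transformations.append("nulls replaced with 0.00")

print(annual_premium)
```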

An information architecture approach to data management is also essential, according to Thazar's Chesbrough. His company promotes a centralized data warehouse-rather than having many data marts-to ensure there is "one version of the truth."

"Store once, use many," is a mantra spoken by proponents of centralized data warehouses. "The idea is to start with the data in a single place and build from there," Teradata's Sinn says. "You can keep reusing the data, but why store it in 20 different systems when you can have it in one place and pull it from there?"

One bite at a time

It's important to remember only 10% to 15% of an organization's data is "enterprise" data, meaning data that is relevant across the organization, The Hartford's Jablonski notes. "The other 85% lives in the business 'siloes.'

"Siloes aren't bad," Jablonski says. "Many organizations have been set up with smaller units to be flexible and react to business changes. That's just the nature of the beast." With an enterprise view of data assets, siloes can still operate as they always have. "We want to provide an enterprise view of information without being disruptive to the business areas."

At The Hartford, for instance, there are approximately 50,000 total data elements, and only 500 are likely to be "enterprise" data elements, he says. But the pitfall for many companies is "they try to bake the whole cake. They try to tackle mastering all their data in one huge initiative. That's overwhelming. It's staggering, and people fumble on it."

Companies are wise to "think big, but start small" when implementing data quality solutions, sources say. "You've got to start someplace, so start at a place you think is the worst, or at least an area that you can clean up, and build out from there," says Teradata's Sinn. "It's like the old adage: How do you eat an elephant? One bite at a time."

A few years ago, companies built huge data warehouses from scratch, says Thazar's Chesbrough. "That was very expensive. Now, we're able to implement systems in components, in phases, for certain lines of business or certain areas such as claims." This way, an insurer can build confidence in the technology, and prove its worth with short-term benefits and return on investment, he says.

In addition, some relatively inexpensive methods of improving data quality can produce ROI quickly. For example, using an address verification tool can cut costs associated with duplicate mailings almost immediately.

"You can narrow down thousands of data records by simply verifying that an address is valid," says Tho Nguyen, program director in data management strategy for SAS Institute Inc., a Cary, N.C.-based business intelligence and analytics software provider.

"When we compare mailing campaigns after addresses have been verified with previous mailings, we've seen as many as 33% of the names dropped because they were invalid," he says. That translates into significant printing and mailing cost reduction.

Kathy Armstrong, a data quality coordinator at Republic Mortgage Insurance Co., Winston-Salem, N.C., says an automated data auditing tool, which her company purchased from Firstlogic about six months ago, has already doubled her efficiency. Plus, she'll be able to produce more professional management reports, rather than Excel spreadsheets.

Standing apart from competitors is about presentation, consistency and conforming to standards, according to U.S. Risk Group's Stringer.

Much of U.S. Risk's business is written with Lloyd's of London, he says. "Every year, when we go to renegotiate our contract with Lloyd's, they're looking at our results. They look to us for data. So the more we bring data that is actuarially sound and consistent with ACORD standards, the more credible that data is to them.

"If it's not consistent and it doesn't follow actuarial standards, they don't pay a lot of attention to it," he says.
