Building a data warehouse at National Interstate
When Scott Noerr joined National Interstate Insurance as CIO almost two years ago, his biggest challenge could be summed up in one word: data. And he was up for the challenge. “It’s one of the reasons why I came here,” says the insurance tech vet and self-described “change agent.”
National Interstate offers both traditional and specialty insurance products to the transportation industry. Noerr’s primary assignment was to untangle the company’s web of detailed product data, data input systems and insurance data formats to create less manual, iterative and labor-intensive workflows. The end result would be faster underwriting and risk management decisions. Eventually, this will allow the carrier to pursue data-intensive efforts such as artificial intelligence and the Internet of Things (IoT).
When Noerr joined the company, he saw “a lot of fragmented, disparate systems,” which had led to “a lack of information shared across the enterprise.” The company had multiple underwriting platforms, for example, with data stored in five different systems, requiring employees to type in information multiple times. “Same thing in claims,” he adds.
So Noerr began looking to reduce redundancies and increase the quality of data the company stores, he says, by implementing a “robust enterprise data warehouse.” A data warehouse stores more kinds of data, and in a format more applicable to analysis and mining, than a transaction-oriented database.
Founded in 1989, National Interstate has grown both through expanding its product portfolio and through mergers and acquisitions. Headquartered in Ohio, with operations in Hawaii and Missouri, the company has more than 600 employees and offers more than 30 products. The tech effort is not an end in itself, Noerr is quick to point out. It’s in service of the company’s “real business drivers,” he says: “customer satisfaction, ease of doing business, and operational excellence.”
National Interstate had tried twice before to build a data warehouse and come up short, first with Microsoft’s business intelligence tools and then with Pentaho, an open-source data analysis toolset. So Noerr brought in Informatica’s PowerCenter—a “traditional ETL (extract, transform, load) tool” for doing “extraction of raw data and loading it into the data warehouse,” says Peter Ku, financial services strategist at Informatica, a data management technology vendor.
Right now, National Interstate is using PowerCenter to extract data from systems in “three subject areas: claims, policy, and general liability,” Noerr says. He intends to “source data from all operational systems,” including agency management systems, he says. “We have gaps in some data because it stays in a file versus in [the data warehouse] where it can be mined for better underwriting,” Noerr says. He’s kept Microsoft SQL Server as the underlying database technology.
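PowerCenter workflows are built in Informatica’s own tooling, but the extract-transform-load pattern the article describes can be sketched in plain Python. The source systems, field names, and formats below are hypothetical, not National Interstate’s actual schemas.

```python
# Toy ETL sketch: extract rows from two hypothetical source systems,
# transform each system's conventions into one warehouse schema, load.

# Extract: raw rows as each source system stores them.
claims_system = [
    {"CLM_NO": "C-1001", "LOSS_AMT": "2500.00", "STATE": "oh"},
]
policy_system = [
    {"policy_id": "P-77", "premium": 1200, "state": "OH"},
]

def transform_claim(row):
    """Normalize a claims-system row to the warehouse schema."""
    return {
        "record_type": "claim",
        "source_id": row["CLM_NO"],
        "amount": float(row["LOSS_AMT"]),
        "state": row["STATE"].upper(),
    }

def transform_policy(row):
    """Normalize a policy-system row to the warehouse schema."""
    return {
        "record_type": "policy",
        "source_id": row["policy_id"],
        "amount": float(row["premium"]),
        "state": row["state"].upper(),
    }

# Load: in production this would be a bulk insert into SQL Server;
# here the "warehouse" is just a list of uniform records.
warehouse = [transform_claim(r) for r in claims_system] + \
            [transform_policy(r) for r in policy_system]

for rec in warehouse:
    print(rec["record_type"], rec["source_id"], rec["state"])
```

The point of the transform step is that downstream analysis sees one schema, no matter which of the five source systems a record came from.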
Going forward, Noerr plans to introduce master data management technology into the data warehouse. A master data management system standardizes and reconciles data from varying sources, which is particularly valuable to organizations like National Interstate with long, detailed product lists. That project will follow the rollout of two Salesforce.com applications, both now in pilot and slated for implementation early next year.
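What an MDM layer does can be illustrated with a toy example: reconcile records for the same product arriving from different systems into a single “golden” record. The product names, fields, and survivorship rule here are invented for illustration.

```python
# Toy master data management sketch: collapse naming variants onto one
# master key and merge fields from multiple systems into a golden record.

def normalize_key(name):
    """Collapse naming differences so variants match the same master key."""
    return " ".join(name.lower().replace("-", " ").split())

source_records = [
    {"name": "Truck Liability", "limit": 1_000_000, "system": "underwriting"},
    {"name": "truck-liability", "limit": None, "system": "claims"},
    {"name": "Passenger Transport", "limit": 500_000, "system": "underwriting"},
]

master = {}
for rec in source_records:
    key = normalize_key(rec["name"])
    golden = master.setdefault(key, {"limit": None, "sources": []})
    golden["sources"].append(rec["system"])
    # Survivorship rule: keep the first non-null value seen for each field.
    if golden["limit"] is None:
        golden["limit"] = rec["limit"]

print(len(master))  # two distinct products survive the merge
print(master["truck liability"]["sources"])
```

Real MDM systems add fuzzy matching and configurable survivorship rules, but the core idea is the same: one authoritative record per real-world entity.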
Longer term, Noerr says he’s interested in “third-party data sources for underwriting—some static, some with telematics.” He’s testing a telematics device that he hopes will let National Interstate “understand specific risks associated with a driver that turn into losses,” such as speeding. Right now the telematics device relays data in batch mode only, as his systems are still unable to “parse streams with high-volume data,” he says.
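Batch-mode telematics analysis of the kind described above might look like the following sketch: scan a day’s worth of device records for speeding events that could signal underwriting risk. The record layout and threshold are hypothetical.

```python
# Toy batch analysis of telematics records: count, per driver, the
# readings that exceed a speed threshold. Fields are hypothetical.

SPEED_LIMIT_MPH = 65

batch = [
    {"driver": "D-12", "timestamp": "2014-06-01T08:00", "speed_mph": 62},
    {"driver": "D-12", "timestamp": "2014-06-01T08:05", "speed_mph": 78},
    {"driver": "D-34", "timestamp": "2014-06-01T08:05", "speed_mph": 55},
]

def speeding_events(records, limit=SPEED_LIMIT_MPH):
    """Return per-driver counts of records exceeding the speed limit."""
    counts = {}
    for rec in records:
        if rec["speed_mph"] > limit:
            counts[rec["driver"]] = counts.get(rec["driver"], 0) + 1
    return counts

print(speeding_events(batch))  # {'D-12': 1}
```

The real-time version Noerr is aiming for would apply the same rule to a continuous stream of readings rather than a nightly file, which is what his systems cannot yet parse.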
The challenge, Noerr says, is “How do we take in that information in real-time in order for us to increase our underwriting capabilities?” Not surprisingly, he’s up for it.
A higher standard
One thorny problem concerns the use of ACORD standards. Over many years, the nonprofit standards organization has worked to develop data interchange formats and tools to aid the insurance industry, including ACORD AL3 and ACORD XML, its platform for Internet communication. These standards, widely used, have helped facilitate the flow of data among the various constituencies involved in insurance transactions.
The problem revolves around too much of a good thing. “ACORD puts out new releases on a regular basis,” Informatica’s Ku says. The various iterations mean insurance constituents, from underwriters to agencies, often find themselves using slightly different versions of the ACORD technology, potentially leading to “errors in conversion” and “operational delays,” Ku says.
This happened at National Interstate. To help ameliorate such data disparity, Noerr researched data integration platforms, including Informatica’s B2B Data Exchange system. It offered “pre-built libraries that help parse ACORD and AL3 files,” he says.
“We’re ACORD-certified,” confirms Ku. The B2B Data Exchange acts “as an ACORD data exchange hub,” he says.
It took about three months to implement the system. Now a workflow that once took two days on average to complete, processing a file from agent to underwriting platform, has been “drastically reduced,” Noerr says, from days to seconds in some instances, significantly increasing “customer satisfaction, cycle time, and ease of doing business with agencies.”
“A file of information that comes to us in ACORD, we can process it automatically,” says Noerr. It flows “right into underwriting,” he says, “and when we get that file we don’t have to build something specific to 150 versions of ACORD.”
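Noerr’s point about not building something specific to 150 versions of ACORD can be illustrated with a toy mapping layer: rather than one parser per release, a canonical model with per-field aliases absorbs version differences. The element names below are invented for illustration, not actual ACORD tags.

```python
# Toy version-tolerant parsing: map whichever field name a given
# release uses onto one canonical schema, instead of writing a
# parser per version. Element names are invented, not real ACORD.

CANONICAL_FIELDS = {
    "insured_name": ["InsuredName", "InsuredOrPrincipal", "NamedInsured"],
    "policy_number": ["PolicyNumber", "PolicyNo"],
}

def to_canonical(raw):
    """Pick the first alias present in the raw record for each field."""
    out = {}
    for field, aliases in CANONICAL_FIELDS.items():
        for alias in aliases:
            if alias in raw:
                out[field] = raw[alias]
                break
    return out

# Two records in different hypothetical "versions" of the format.
older = {"InsuredOrPrincipal": "Acme Trucking", "PolicyNo": "NI-001"}
newer = {"InsuredName": "Acme Trucking", "PolicyNumber": "NI-001"}

print(to_canonical(older) == to_canonical(newer))  # True
```

A commercial hub like B2B Data Exchange ships this mapping knowledge as the pre-built libraries Noerr mentions, so the downstream underwriting platform only ever sees the canonical form.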