The New Data-Driven Insurer
Doing more with less can be exhausting. But as the U.S. economy has stabilized, insurers have begun shifting from cost-cutting measures to more ambitious technology projects intended to support competitive initiatives and drive efficiencies, according to industry experts.
The share of U.S. insurers planning to increase technology spending reached 56 percent this year, up from 40 percent in 2010, according to insurance research and consultancy Strategy Meets Action's (SMA) "2013 Insurance Ecosystem: Insurer Technology Spending, Drivers, and Projects." And the emphasis is decidedly on projects intended to confer strategic advantages and differentiation, as 71 percent of the 157 insurers surveyed are shooting for transformation and growth.
Allstate, the nation's largest publicly held personal lines insurer, XL Group, a global P&C, professional and specialty insurer, and American International Group (AIG), a global property/casualty, life and retirement insurer, are shifting gears. They are working smarter by leveraging an asset insurers are awash in - data - to drive better decisions in marketing, pricing and claims.
And they are not alone. According to SMA, 46 percent of insurers said they would increase spending in the front office on technology for marketing. In the middle office, 55 percent said they would increase spending for underwriting and new business. In the back office, 37 percent said they would increase spending on claims/payout. Among the top 10 projects identified were analytics, business intelligence, data warehousing and big data, all intended to help insurers make better use of the data they have, access newly available data and make better, more informed decisions to find opportunities, price them more effectively and reduce losses.
RIGHT PEOPLE, RIGHT MESSAGE, RIGHT TIME
Among P&C insurers, advertising spend now exceeds $4 billion per year, says Pamela Moy, Allstate's VP of marketing analytics, research and administration, and it's unlikely to slow soon, especially as growth of the P&C market itself has slowed in recent years. At the same time, computer processing power, third-party data and analytics are maturing and becoming less expensive.
"All of that is making [marketing] analytics truly worthwhile," Moy says. "People used to talk about all this before, but it just wasn't implementable." The rise of digital media, in addition to offering new channels to reach consumers, enables insurers and their marketing partners to collect customer-response data almost instantly, measure and understand the effectiveness of marketing efforts, and quickly adjust them to increase their effectiveness, she says.
"We are using analytics to target the right people with the right message at the right time," Moy says. "We are able to understand the types of quotes we are generating from marketing, as well as the incremental sales we are generating, to ensure that it's financially viable. That type of analysis, as well as the competitive environment is fueling the [advertising] increases," she says. "We have to make sure that we're breaking through with every single dollar that we were spending and that we are reaching the right people."
Allstate has several analytics teams, including those that work on digital and broad-based media. Within the digital realm, Allstate is leveraging many applications and tools, including a data management platform and Addressable Television, which enables Allstate to segment television audiences and deliver specific advertisements and groups of advertisements to specific audiences within a common program. According to Gartner Group, audience segmentation can be done based on geography, demographics and behaviors. It's delivered through cable, satellite, Internet protocol television (IPTV) and set-top boxes.
Moy explains that advertising messages are segmented and delivered to households identified in target profiles, which they developed using data from third-party vendors, including Acxiom and Experian, and an advertiser's database match. Allstate uses an Oracle database and SAS software to sort through the data.
"Let's say I'm trying to reach only renters with a renters' message," Moy says. "There are about 35 million households total that are 'addressable,' and that number is growing. I can take third-party data to find who, out of all those households, are renters. And then, with certain media partners who offer Addressable TV, I can make offers to only those people."
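The matching step Moy describes can be pictured as intersecting the addressable-household universe with a third-party attribute feed. The sketch below is purely illustrative: the household IDs, field names and feed structure are invented, not Allstate's actual systems.

```python
# Hypothetical sketch of segment matching for Addressable TV: intersect
# the addressable household universe with third-party data flagging
# renters, so a renters' message reaches only renters. All IDs and
# field names are illustrative assumptions.

addressable_households = {"HH-001", "HH-002", "HH-003", "HH-004"}

# Simplified stand-in for an Acxiom/Experian-style attribute feed.
third_party = {
    "HH-001": {"tenure": "renter"},
    "HH-002": {"tenure": "owner"},
    "HH-003": {"tenure": "renter"},
    "HH-005": {"tenure": "renter"},  # not addressable, so excluded
}

def renter_target_segment(households, attributes):
    """Return the addressable households flagged as renters."""
    return {
        hh for hh in households
        if attributes.get(hh, {}).get("tenure") == "renter"
    }

segment = renter_target_segment(addressable_households, third_party)
print(sorted(segment))  # ['HH-001', 'HH-003']
```

Only households that appear in both sets are served the renters' offer; a renter outside the addressable universe (HH-005 here) never enters the segment.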
For example, through Addressable Television, a consumer could receive a specifically targeted ad on his television, then go to Allstate's website, where the company can track his activity, online behaviors and interests and combine them to create a profile. This profile could be forwarded to an agent, who could then have a more informed conversation with that consumer.
"That's the Nirvana. The very last step is pretty tricky to do well," she says. "We are not 100 percent there, but that is what we try to get to: Making sure that we can match that sale to that person."
Among the tools Allstate uses is [x+1] Origin, a data management platform (DMP) designed for data-driven digital marketing, and featuring multichannel capabilities. The vendor, [x+1], describes the technology as powered by a real-time decision engine, Web services APIs and advanced analytics. The company says Origin acts as a dashboard for marketing segmentation and campaign management, as well as an integration point for internal and external data, media (such as Google, Rubicon, Trust metrics, Facebook Exchange and others) and third-party data providers for consumer and business targeting. The DMP enables cross-channel execution and cross-channel measurement through integrations with Adobe, epsilon, Kenshoo and others. [x+1] Origin also supports integrations with Oracle and Aprimo customer relationship management applications.
For Web and other messages, geographic targeting plays an important role in the analytics and selection process, Moy says. Because insurance rates and competitive pressures vary by location, Allstate is testing geographic areas to determine which consumers are most attractive to the company.
"That group, theoretically, we could spend more money on," she says. "It used to be that we would pick a few targets and they would be pretty broad. Now you can be looking at a certain group to see how they respond and decide whether we want that."
Targeting could be based on Allstate's desire to sell a specific product based on knowledge of that consumer, she explains. "I could be looking at motorcycle enthusiasts because I sell motorcycle insurance. But it also could be a sort of corollary group based on shopping triggers. If I know that people who buy cars also tend to be in the market for insurance, I may be looking for that sort of adjacent target. And we could have 100 different target groups, and I can look at them much more finely," she says.
Allstate works with third parties, such as Acxiom and Experian, to gather the data and uses the DMP to pull it in and perform the analysis. Then, using cookie-level attributes, they examine the overlap of certain targets against different key events and create audience pools based on when individuals visit the website, Moy explains.
"The other thing is multi-touch attribution, which is making sure that you are attributing the benefit of your sales appropriately across the different touch points within online, paid search and platforms. There are analytics techniques to make sure you are doing that correctly."
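One of the simplest forms of the multi-touch attribution Moy mentions is linear attribution, which splits the credit for each sale equally across the touch points that led to it. The sketch below shows that simple case only, with invented channel names and journeys; the techniques Allstate's teams use are not specified in detail and would be more sophisticated.

```python
from collections import defaultdict

# A minimal sketch of linear multi-touch attribution: each converting
# customer journey distributes one unit of sale credit equally across
# the channels that touched it. Channels and journeys are illustrative.

def linear_attribution(journeys):
    """journeys: list of (touchpoint list, converted flag) pairs.
    Returns total credit per channel across converting journeys."""
    credit = defaultdict(float)
    for touchpoints, converted in journeys:
        if converted and touchpoints:
            share = 1.0 / len(touchpoints)
            for channel in touchpoints:
                credit[channel] += share
    return dict(credit)

journeys = [
    (["display", "paid_search"], True),   # 0.5 credit each
    (["display"], True),                  # 1.0 credit to display
    (["paid_search", "social"], False),   # no conversion, no credit
]
print(linear_attribution(journeys))  # {'display': 1.5, 'paid_search': 0.5}
```

Alternatives such as time-decay or algorithmic attribution weight the touch points unequally, which is the "doing that correctly" problem the analytics techniques address.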
The common theme among these efforts is the ability to segment consumers and more effectively reach the most desirable through multiple channels. But the tools also enable Allstate to understand quickly what marketing messages are working best and change those that are less effective, Moy says.
"From the time you serve an ad to when someone clicks it is far more tangible and immediate than when we are running a commercial on television. We have very sophisticated ways of understanding whether TV ads are working, but that's not as immediate," as it takes longer to work through the purchase process and to retrieve and analyze the data. "On the digital side, we are able to see that very quickly and make decisions and adjustments and to target more finely. We've been working on this for several years now and are continuously trying to improve it," Moy says.
Regardless of the line of business, consistently assigning the right risk is what makes or breaks an insurer, and it's especially true for those that take on large commercial risks.
XL Group specializes in complex risk assessment. "We don't deal with a lot of volume," explains Kurt Schulenburg, IT leader. "But XL's strategy is to grow the business volume-wise and increase premium, without growing the employee base."
To help achieve that goal, Schulenburg is responsible for helping the company learn from its past underwriting experiences, determining what data impacts future losses for XL's North America and international P&C business, and creating XL's global underwriting platform. However, not so long ago, there were multiple tools for pricing and risk evaluation and many manual underwriting processes, he says.
"If we wanted to go back and look at all the risks that have shared characteristics, we'd have to assemble a bunch of Excel spreadsheets and e-mails," Schulenburg says. "When you work in a manual environment, all you're able to use is the experience of the underwriter. What we are trying to get to is a combination model, where we use the experience of the underwriter, but we augment that with better analytics and more data from prior years."
By determining, locating and attaching certain policy attributes to loss history, XL is attempting to give underwriters more information and drive better decisions, he says. "If I now capture all the data that goes into a risk evaluation, and I capture it over the course of three years and see how it corresponds to our losses for the last three years, I can start telling them, 'hey, did you know this variable had this effect on loss?' Eight times out of 10, their answer is 'yes,'" he says, but now those variables associated with losses can be identified, tracked and shared in the next iteration of the model, making the underwriter even better.
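The core of that exercise is joining captured risk attributes to loss outcomes and measuring how each attribute shifts loss frequency. The toy sketch below illustrates the idea with an invented "coastal" attribute; XL's actual models, built with SAS, would involve many variables and proper statistical testing.

```python
# Illustrative sketch of attribute-vs-loss analysis: given risks with
# captured attributes joined to loss history, compare loss frequency
# for risks with and without a given attribute. Field names and the
# "coastal" attribute are assumptions for the example.

def loss_rate_by_attribute(risks, attribute):
    """Return (loss rate with attribute, loss rate without it)."""
    with_attr = [r for r in risks if r["attributes"].get(attribute)]
    without = [r for r in risks if not r["attributes"].get(attribute)]
    def rate(group):
        return sum(r["had_loss"] for r in group) / len(group) if group else 0.0
    return rate(with_attr), rate(without)

risks = [
    {"attributes": {"coastal": True},  "had_loss": True},
    {"attributes": {"coastal": True},  "had_loss": True},
    {"attributes": {"coastal": True},  "had_loss": False},
    {"attributes": {"coastal": False}, "had_loss": False},
    {"attributes": {"coastal": False}, "had_loss": True},
    {"attributes": {"coastal": False}, "had_loss": False},
]
with_rate, without_rate = loss_rate_by_attribute(risks, "coastal")
print(with_rate, without_rate)  # ~0.67 vs ~0.33
```

A large, persistent gap between the two rates is exactly the kind of "did you know this variable had this effect on loss?" signal that can be fed back into the next iteration of the model.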
The first step, Schulenburg says, was to capture all of the underwriting information, pulling data from claims, underwriting, distribution and pricing applications, and incorporating it into XL's strategy, which involves taking a risk from a broker, evaluating it against XL's appetite, assigning a price and generating a quote. Next, Schulenburg's group created a common workbench that orchestrates that process and tracks where that data is stored. The data then is organized into a dimensional model for access and analysis.
"We take those four systems, and we can get a whole lot of insights: which brokers are sending us the best risks and what attributes indicate a future loss, for example, and we start adjusting our underwriting processes to optimize what risks we go after," Schulenburg says. "We are using our data warehouse infrastructure to build relations amongst all our strategic systems. And that infrastructure now supplies data to our actuarial and strategic analytics groups to further advance our underwriting requirements."
Schulenburg says XL already has identified several "highly available Dun & Bradstreet-type" attributes that were not previously identified as risks. "When the rest of the industry figures those out, we've lost that advantage. We've only had it for a couple months, and we'd like to keep it a little longer," he says, followed by a laugh.
XL uses FirstBest's Underwriting Workstation for underwriting evaluation, Accenture's Duck Creek policy administration platform for the North America ISO-regulated business and Xuber's Genius policy admin for the European business. A homegrown .NET pricing system sits on top. Integrations are through Microsoft's BizTalk, and they use Microsoft Dynamics, a customer relationship management platform, for their distribution system. Data from those applications is loaded into a Microsoft SQL Server database via Informatica. The analytics team then uses SAS Visual Analytics to build relationships and find insights, Schulenburg says.
"Duck Creek is the best system for us for those ISO North America regulated lines. But Genius is still the best platform for international currencies and those things in Europe," Schulenburg says. "Now the policy admin systems are separate from underwriting. That's the innovation. And underwriting, whether you are in Germany or Idaho, is the same. I can have the same information and share it across. But when I have to write a policy and do renewals and figure currencies, that's where I now have two separate systems."
The analytics models currently being implemented for professional lines were built by XL's strategic analytics team and SAS. An actuarial team, led by Chief Actuary Susan Cross, has been responsible for getting the data and for improvements to the actuarial process, Schulenburg says.
"The goal was to provide a more consistent user experience and better data management so my underwriters all have the same experience in terms of how they gather data and the level of quality that comes out of it," Schulenburg says. "If everyone is capturing the same data consistently, when I get to my analytics phase I'm not worried about how someone interpreted a piece of data. Everyone is using the same definitions across products, geographies and XL organizations. It's a lot easier. We have 75 products across every continent. The old way, I could only look at one set at a time; now I'm able to look at the business any way I like."
XL also is using third-party data from providers including Dun & Bradstreet, the U.S. Census and geocoding from Google Maps and Trillium. "We are finding new ways to use external data, and that's one of the big wins so far," Schulenburg says, and geocoding presents another large opportunity.
"The evolution we're [working toward] is portfolio risk evaluation. Underwriters will see every environmental factor and what other risks XL insures near that location. That's enabled by better-captured data. And we'll keep an inventory of all the locations we insure across all lines of business. An underwriter will be able to see all our exposure, real time, in that area. If you have a building in a volatile area, we might take one, but taking two might be a bad idea. We want to spread out our exposure."
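Mechanically, the real-time exposure view Schulenburg describes amounts to a spatial query over geocoded insured locations: given a proposed new risk, how much insured value already sits nearby? The sketch below is a minimal version of that check; the coordinates, insured values and radius are invented, and a production system would also weight by peril and policy terms.

```python
from math import radians, sin, cos, asin, sqrt

# Illustrative sketch of a portfolio-exposure check: sum the insured
# value already on the books within a radius of a proposed location.
# Locations, values and the 5 km radius are assumptions for the example.

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * asin(sqrt(a))

def nearby_exposure(portfolio, lat, lon, radius_km=5.0):
    """Total insured value within radius_km of the given point."""
    return sum(
        loc["insured_value"]
        for loc in portfolio
        if haversine_km(lat, lon, loc["lat"], loc["lon"]) <= radius_km
    )

portfolio = [
    {"lat": 40.7128, "lon": -74.0060, "insured_value": 10_000_000},  # NYC
    {"lat": 40.7306, "lon": -73.9866, "insured_value": 5_000_000},   # ~2.5 km away
    {"lat": 41.8781, "lon": -87.6298, "insured_value": 8_000_000},   # Chicago
]
print(nearby_exposure(portfolio, 40.7128, -74.0060))  # 15000000
```

An underwriter comparing that nearby total against an exposure limit for a volatile area gets the "take one, but not two" signal the passage describes.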
Schulenburg says despite the many advancements in analytics, the company is unlikely to automate underwriting decisions fully. "Data will never be the only answer, but we believe it's a great tool to help the underwriter become more accurate. It's setting us up for constant improvement," Schulenburg says. "We can build systems that are efficient and lower our cost components, but our goal at XL is to pass on savings to our customers," he says. "This is definitely the future of insurance. Anybody who can come up with a nice data source that you can prove is indicative of risk is truly valuable. The key is accuracy and consistency of data, but if you are able to do that, it's a gold mine."
ART AND SCIENCE
As important as it is to generate top-line revenue and adequately assess risk, managing loss ratios is equally important, particularly in a sluggish economy when fraud typically increases.
While fraud detection models are widely used in the credit card industry, where they are run behind virtually every transaction, they are comparatively new for commercial property and casualty insurers, says David Lee, VP of analytic capabilities for AIG's global property insurance arm.
In his nine years with the company, Lee has built a variety of analytics models with SAS Business Analytics for underwriting, financial analysis and other domains. According to SAS, AIG identified six predictors that it incorporated into its Web-based quantitative risk model. The insurer used the model to create summary risk profiles and enable risk-based business decisions, resulting in $14 million in new executive liability business and helping AIG avoid a potential loss of $75 million on certain executive liability accounts.
In separate projects, Lee and the team built an automated reconciliation tool to reduce the reserves required to cover discrepancies in unreconciled payments, allowing AIG and auditors to review historical data and, with a data matching process, locate millions of dollars in unreconciled payments, resulting in a deferred tax credit of $10 million. AIG also used SAS to build models to estimate bad debt reserves based on open balances across multiple lines of business. The methodology and algorithms comply with audit requirements and provide stable exposure estimates each quarter. And more recently, Lee began applying data sciences to fraud detection.
"In insurance, we are much more blind to fraud," Lee says, because the insured frequently is the perpetrator of the fraud and there is no feedback mechanism to the insurer. Credit card companies, as a part of their business process, see both legitimate and fraudulent claims and are able to build models based on a more robust data set, he explains.
In addition, insurance is a much more complicated business, he says. "There are hundreds of perils that we insure for our commercial customers, and each of those could be a distinct product or a bundle of products and would require a model of similar sophistication as the one used by a credit card company." Credit card companies, on the other hand, essentially are looking for delinquency and default. "Insurance companies want to know if their customer is going to get sued for employment discrimination. Or, is their building going to get hit by lightning, or flood due to proximity to a body of water," he says.
Lee says there are a number of ways AIG and other insurers are detecting claims fraud through analytics. He first describes a "supervised learning approach," which entails creating an analytical model based on attributes of historical known-fraud cases, created from real-world experiences and historical data, and scoring future claims based on those attributes. An "unsupervised approach" could include social or relationship mapping, in which analysts look for patterns within a social network, such as between claimants, providers and lawyers.
"Hypothetically, if this type of fraud is being perpetrated over and over again by the same individuals, business IDs would pop up more often than you would expect from a legitimate provider," Lee says. "We have built some technology to historically explore the proprietary data sets that we have, which contain claimants' names and addresses, the names of providers, addresses and IDs, and we look at the relationship between providers, claimants and lawyers to see if there are any anomalous relationships. That always results in more investigation, because all fraud detection can do is point you in the direction of a set of cases that might be fraudulent. Then you have to go to more traditional methods to confirm and see if you want to take further action."
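The simplest version of the "IDs pop up more often than you would expect" check is a frequency count over claims. The sketch below shows only that simple case, with invented IDs and an arbitrary threshold; AIG's actual technology models full claimant-provider-lawyer relationship networks.

```python
from collections import Counter

# Toy sketch of the unsupervised frequency check: flag provider (or
# lawyer) IDs that appear across more claims than a legitimate provider
# plausibly would. IDs and the threshold are illustrative assumptions.

def flag_frequent_ids(claims, field, threshold):
    """Return IDs appearing in more claims than the threshold allows."""
    counts = Counter(claim[field] for claim in claims)
    return {id_ for id_, n in counts.items() if n > threshold}

claims = [
    {"claimant": "C1", "provider": "P-100"},
    {"claimant": "C2", "provider": "P-100"},
    {"claimant": "C3", "provider": "P-100"},
    {"claimant": "C4", "provider": "P-100"},
    {"claimant": "C5", "provider": "P-200"},
]
print(flag_frequent_ids(claims, "provider", threshold=3))  # {'P-100'}
```

As Lee stresses, a flagged ID is only a pointer toward cases worth investigating, not a finding of fraud.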
Lee says the data scientists have a lot of interaction with subject matter experts and investigators and rely on them for practical knowledge. And, Lee says, they will be working together on a product-by-product basis to assemble databases, composed of internal and external datasets, and review the quality of the resulting referrals.
Lee says that the entire chain of events in the claims process is examined, and that text mining is a popular way to distill information from unstructured notes.
"You look at the frequency of all the words within all the notes and you can find the most popular ones," Lee says. "You might find some words that are indicative. And you can test whether, statistically, those words show up a lot when fraud is actually present. Correlations are everywhere," he says. "If it turns out that correlation is persistent over time, you could build a predictive model off that. Is it the word 'Sunday' in the adjustors' notes? Maybe. Or, if a claim is reported on a Monday? Many people get injured over the weekend. So if they reported a severe injury on a Monday, they might have gotten injured over the weekend. I'm not saying it's that simple, but through text mining, word searches and dates, and other things, across tens of thousands of claims, things start to repeat and you start to find these correlations, which can be used in a predictive setting on a go-forward basis."
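The word-frequency step Lee walks through can be sketched as counting, for each word in the adjusters' notes, how often it co-occurs with confirmed fraud versus how often it appears overall. The notes and labels below are invented for illustration; a production model would test such correlations statistically, as Lee notes, before trusting them in a predictive setting.

```python
from collections import Counter

# Toy sketch of the text-mining approach: for each word in adjusters'
# notes, count appearances in fraud-labeled notes vs. all notes. Notes
# and fraud labels are illustrative assumptions.

def word_fraud_rates(notes):
    """notes: list of (text, is_fraud) pairs.
    Returns {word: (count in fraud notes, count in all notes)}."""
    fraud_counts, total_counts = Counter(), Counter()
    for text, is_fraud in notes:
        words = set(text.lower().split())  # count each word once per note
        total_counts.update(words)
        if is_fraud:
            fraud_counts.update(words)
    return {w: (fraud_counts[w], total_counts[w]) for w in total_counts}

notes = [
    ("injury reported monday after weekend", True),
    ("severe injury reported monday", True),
    ("vehicle damage reported thursday", False),
]
rates = word_fraud_rates(notes)
print(rates["monday"])    # (2, 2): appears only in fraud-labeled notes
print(rates["reported"])  # (2, 3): appears in all three notes
```

Words whose fraud count dominates their overall count, persistently over time and across tens of thousands of claims, are the candidates that might earn a place in a predictive model.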
Lee says the models will enable AIG eventually to go through all claims, vs. having seasoned veterans work through a handful. "We believe they will be pointing our investigative units at a more target-rich pool of referrals," Lee says. "The models so far are well received and the investigators say the referrals are good."