Completeness is Key When Contemplating Catastrophe

As another hurricane season unfolds, property/casualty insurers can either cross their fingers or check their catastrophe models. While the efficacy of the former hasn't improved, advances in the latter have come steadily.

Indeed, incremental improvements in modeling technologies, coupled with a widespread movement toward analytical rigor, have given carriers a much more granular view of the risks they insure. An insurer can now easily accumulate far more accurate data about where a risk is located, how it is built and how it is occupied than ever before. A proliferation of geospatial and other information databases enables insurers to compare third-party information against their own - all from an underwriter's desktop. This data, in turn, becomes feedstock for models of ever-increasing complexity developed by firms such as Oakland-based EQECAT Inc., Newark, Calif.-based Risk Management Solutions Inc. and Boston-based AIR Worldwide Corp.

"Compared to just 10 years ago, what is capable in modeling today has really increased on multiple fronts," says John Elbl, catastrophic aggregation manager, North America, for Schaumburg, Ill.-based Zurich North America.

Elbl credits much of this to the inexorable improvement in microprocessors, which, following Moore's Law, have grown exponentially in power, enabling insurers to run models in hours that previously consumed days. "The biggest constraint in the mid-to-late '90s was a lack of computing power," he says, adding that just one of the 23 computers he currently dedicates to modeling may well have four times the computing power that the entire company had in the mid-'90s.

Given the constant improvement, it's not surprising that model architecture has not kept up with advances in computing power, Elbl says, but adds that modeling companies are working diligently to address the issue. "My guess is that in a year or two they will solve that."

This is good, because for carriers (not to mention regulators and rating agencies), too much modeling is never enough. "We are always going to be one of those industries that is hungry for more horsepower because the more you have, the more simulations you can run," Elbl says.
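Elbl's appetite for horsepower reflects how catastrophe models actually spend their cycles: they run a portfolio of exposures through tens or hundreds of thousands of simulated hurricane seasons, so more compute translates directly into more simulated years and a better-resolved view of the rare, expensive ones. The sketch below is a deliberately minimal illustration of that idea, not any vendor's methodology; the event rate, damage-ratio distribution and portfolio value are invented for the example.

```python
import numpy as np

# Minimal, illustrative Monte Carlo catastrophe simulation.
# All figures below are invented for the example, not vendor parameters.

rng = np.random.default_rng(42)

N_YEARS = 100_000        # number of simulated hurricane seasons
EVENT_RATE = 1.7         # average landfalling events per season (assumed)
PORTFOLIO_TIV = 5_000e6  # total insured value of the book, in dollars (assumed)

def simulate_season(rng):
    """One season: Poisson event count, heavy-tailed damage ratio per event."""
    n_events = rng.poisson(EVENT_RATE)
    damage_ratios = np.minimum(
        rng.lognormal(mean=-6.0, sigma=1.8, size=n_events), 1.0)  # cap at total loss
    return float(np.sum(damage_ratios) * PORTFOLIO_TIV)

annual_losses = np.array([simulate_season(rng) for _ in range(N_YEARS)])

print(f"Average annual loss:  ${annual_losses.mean() / 1e6:,.1f}M")
print(f"Worst simulated year: ${annual_losses.max() / 1e6:,.1f}M")
```

Doubling the computing budget in a sketch like this simply doubles N_YEARS, which is the quantitative version of Elbl's point: the more years you can simulate, the more clearly the tail of the loss distribution comes into focus.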

BAD DATA

Yet, as the problems associated with a lack of horsepower have largely abated, another potentially grave issue has arisen - the quality and completeness of the data going into the models.

"What's been established by looking at losses that have occurred over the last four to five years is that insurers have databases of exposure information that are somewhat incomplete and inaccurate." says John DeMartini, catastrophe management practice leader at New York-based Towers Perrin.

Elbl concurs that carriers' ability to model data may now outstrip their ability to collect it. "One of the biggest constraints that we will come up against is the amount and quality of data that we receive from brokers," he says. "This is an industry-wide situation."

While CAT modeling firms are working on tools to help insurers improve the completeness and quality of their exposure data, others suggest insurers take a more hands-on approach, especially when insuring properties in coastal regions.

"The industry has historically made its pricing models based on bad data," says Steve Pietrzak, the president and CEO of Itasca, Ill.-based Millennium Information Services, which pairs property inspection services with an analytic engine. Pietrzak says he sees insurers increasingly relying on on-site inspections over external data sources such as tax records or aerial photos when gauging the square footage of a property. "I've seen companies go to 100% inspection models," he says. "They feel better actually looking at what they are insuring."

With data quality becoming a higher priority among insurers, DeMartini says he sees the situation eventually getting better. "Two to three years from now, I foresee significant improvement in the overall information going into the models," he says.

BLACK SWANS

One issue with which many insurers must grapple is losses that are highly improbable, yet do happen - the "black swans" made famous in Nassim Nicholas Taleb's bestseller. Companies may offset some of these losses through reinsurance, but can't reinsure all potential risks without spending too much of their premium income on reinsurance. Accordingly, even though these events occupy the far tail of the loss distribution, insurers still need to account for them in their models.
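In practice, accounting for the tail usually means summarizing a simulated loss distribution with exceedance metrics - for instance, the 1-in-100-year or 1-in-250-year annual loss that reinsurers and rating agencies commonly ask about. The snippet below sketches that calculation on an array of simulated annual losses (such as the one built in the earlier sketch); the excess-of-loss retention and limit are invented purely to show how a simple reinsurance layer changes the retained tail, and are not anyone's actual program.

```python
import numpy as np

def loss_at_return_period(annual_losses: np.ndarray, years: float) -> float:
    """Loss exceeded, on average, once every `years` simulated years."""
    # The (1 - 1/years) quantile of the simulated annual loss distribution.
    return float(np.quantile(annual_losses, 1.0 - 1.0 / years))

def apply_reinsurance(annual_losses: np.ndarray, retention: float, limit: float) -> np.ndarray:
    """Net losses after a simple excess-of-loss layer (illustrative only)."""
    ceded = np.clip(annual_losses - retention, 0.0, limit)
    return annual_losses - ceded

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    # Stand-in for simulated annual losses, in millions of dollars (assumed).
    losses = rng.lognormal(mean=3.0, sigma=1.2, size=100_000)

    net = apply_reinsurance(losses, retention=100.0, limit=250.0)
    for rp in (100, 250, 500):
        print(f"1-in-{rp} gross: {loss_at_return_period(losses, rp):8.1f}M  "
              f"net: {loss_at_return_period(net, rp):8.1f}M")
```

Even with the layer in place, the net loss at long return periods remains large once gross losses blow through the limit - the arithmetic behind the point that insurers cannot economically cede the whole tail and must model and hold capital against what remains.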

While Hurricane Katrina was devastating, there are events out there that could double or triple Katrina in insured losses, DeMartini notes. "In our world, a black swan would be a Category 4 that hits the east coast of Florida and continues up the Atlantic Coast to hit coastal North Carolina and ends up in the Northeast Atlantic states. It's extremely unlikely to occur, but if it did, the industry could be looking at $100 billion in losses," he says.

Alice Gannon, a consultant at San Diego-based EMB Consultancy LLC, says insurers need look no further than the current financial crisis for another reminder that risks once thought unimaginable, such as the bankruptcy of Lehman Brothers, can occur. "The bankers were using models that were not good in the tails," Gannon says.

Fortunately for insurers, a wealth of academic research and intellectual capital is going into improving the models. However, this requires both insurers and modeling companies to make judgment calls about the reliability of new science. "I make sure to keep up on current scientific articles in order to understand the differences between the current models and the most recent research," Elbl says. "Some of our opinions may differ slightly from the modeling vendors. We will make appropriate adjustments to model outputs to align with what Zurich feels its view of risk is."

THE NEW MODELS

Even the best-calibrated model is essentially a lot of educated guesswork in the absence of solid historical data. For good or ill, every hurricane season provides a wealth of raw data to reassess how well a model is performing. "There's still huge room for improvement in the models, but it's going to largely be a function of time," Gannon adds. "Even a hurricane season without any significant landfalls is a good update."

Elbl also says he sees models getting better as data accumulates. "It's a pretty widely held opinion that there are some parts of the model that offer more comfort than others because of recent storm activity or lack thereof," he says. "I think the models are much more accurate now than they were in 2001, before the rash of storms started hitting Florida."

The models are also evolving to reflect other recent lessons. For example, many expect newer models to better account for the collateral effects of disasters, such as the damage Hurricane Ike continued to inflict across the Midwest after initially pummeling the Gulf Coast. The next revisions of the models, expected in 2011, will also likely place greater emphasis on convective storms, such as tornadoes, which have grown in frequency and severity in recent years.

"From an insurance industry standpoint, the updating of the models is both a blessing and a curse," Gannon says, noting that updated models can radically alter short-term projected losses, capital planning and pricing. "That lack of stability can be problematic when you are trying to make plans."

DeMartini concurs that a little stability would be nice, if Mother Nature obliges. "This is an interesting time analytically, and I think the modelers have really stepped up their game - they have a lot of really talented people," he says. "But all of us in the modeling community are hoping for a year or two of stability because it's getting hard to benchmark from one analysis to the next when the models keep moving."

(c) 2009 Insurance Networking News and SourceMedia, Inc. All Rights Reserved.
