InsureThink

Frequent climate events are quietly becoming a larger problem

Panoramic view of a flooded town street. (Adobe Stock)

The insurance industry has spent decades obsessing over the big one — the Category 5 hurricane, the 9.0 earthquake, the once-in-a-generation catastrophe that keeps reinsurance executives awake at night. We've built entire capital structures, modeling ecosystems, and regulatory frameworks around these so-called primary perils. And while we were watching the horizon for the monster storm, the cumulative weight of smaller, more frequent climate events quietly became the larger problem.

In 2025, insured losses from non-peak perils — severe convective storms, wildfires, flooding, hail — reached approximately $98 billion, making it the costliest year on record for secondary perils, according to Munich Re. That figure represented roughly 90% of the $108 billion in total insured natural catastrophe losses for the year. This wasn't an anomaly. For seven consecutive years, secondary perils have outpaced primary perils in driving aggregate insured losses globally. The pattern is no longer a pattern — it's the new baseline.

Uninsured secondary losses grow

And yet, the protection gap keeps widening. Swiss Re estimated that 57% of total economic losses from natural catastrophes went uninsured in 2024, a gap exceeding $181 billion. The uncomfortable truth is that the perils growing fastest are precisely the ones our industry is least equipped to cover.
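To see the scale those two numbers imply, a back-of-the-envelope sketch in Python, using only the figures cited above:

```python
# Rough arithmetic from the Swiss Re 2024 figures cited above: if 57% of
# economic losses went uninsured and that gap exceeded $181 billion, the
# implied totals follow directly. Values in USD billions.
uninsured_share = 0.57
protection_gap = 181.0

total_economic_losses = protection_gap / uninsured_share   # ~318
insured_losses = total_economic_losses - protection_gap    # ~137

print(f"Implied total economic losses: ~${total_economic_losses:.0f}B")
print(f"Implied insured losses:        ~${insured_losses:.0f}B")
```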

Here's why. The traditional catastrophe modeling infrastructure — the vendor models that the industry has relied on for decades — was purpose-built for peak perils. Hurricanes and earthquakes have deep historical datasets, well-understood physical dynamics, and decades of actuarial refinement behind them. Secondary perils have none of these advantages. Severe convective storms are hyper-localized, wildfire behavior is driven by dozens of interacting variables from soil moisture to fuel connectivity, and flooding risk changes year to year as development patterns shift. The standard models simply weren't designed to capture this complexity, and their track records on secondary perils reflect it.
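A quick illustration of the data problem: with a thin event history, even the most basic frequency estimate carries enormous uncertainty. The event counts below are hypothetical and the normal approximation is crude, but the contrast is the point.

```python
# Minimal sketch (hypothetical counts) of why thin event histories make
# secondary-peril pricing unstable: with few observed events, the confidence
# interval on annual frequency is wide relative to the estimate itself.
import math

def poisson_ci(events: int, years: int, z: float = 1.96):
    """Approximate 95% CI on annual event frequency (normal approximation)."""
    rate = events / years
    se = math.sqrt(events) / years
    return rate, max(rate - z * se, 0.0), rate + z * se

# A peak peril with a long record vs. a hyper-localized secondary peril.
for label, events, years in [("hurricane basin (long record)", 120, 60),
                             ("hail at one ZIP code",            4, 10)]:
    rate, lo, hi = poisson_ci(events, years)
    print(f"{label}: {rate:.2f}/yr, 95% CI [{lo:.2f}, {hi:.2f}]")
```

The long-record estimate is pinned down to within roughly 20%; the short-record one could plausibly be anywhere from near zero to double the observed rate.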

This mismatch between where losses are actually occurring and where our modeling capability is strongest creates a dangerous feedback loop. Insurers can't price what they can't model. What they can't price, they can't underwrite. What they can't underwrite becomes someone else's problem — usually the homeowner, the farmer, or the small business owner who discovers after the fact that their coverage had gaps they never understood.

The Los Angeles wildfires in January 2025, which produced an estimated $40 billion in insured losses, illustrated this dynamic vividly. Traditional capacity had pulled back from wildfire exposure in the years prior, leaving thousands of California property owners either uninsured or underinsured at the worst possible moment. The event wasn't unforeseeable — it was unmodeled by the tools most carriers relied on.

Finding the coverage gaps

Closing this gap requires a fundamentally different approach to risk assessment. The next generation of catastrophe models needs to move beyond historical loss distributions and incorporate the environmental variables that actually drive secondary peril behavior — real-time satellite imagery, high-resolution terrain and vegetation data, soil moisture conditions, fuel connectivity mapping, and machine learning techniques that can identify non-linear relationships across dozens of interacting features. The data exists. The computational power exists. What's been missing is the willingness to build from scratch rather than patch legacy systems that were never designed for this problem.
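As a concrete, purely illustrative sketch of what that looks like in practice, here is a toy gradient-boosted model in Python trained on synthetic data. The feature names are hypothetical stand-ins, not any vendor's actual inputs, but they show the kind of non-linear interaction those techniques are built to find.

```python
# Illustrative only: a toy version of the feature-based modeling described
# above, trained on synthetic data. All names and values are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical parcel-level environmental features.
soil_moisture = rng.uniform(0, 1, n)       # 0 = bone dry, 1 = saturated
fuel_connectivity = rng.uniform(0, 1, n)   # contiguity of burnable vegetation
slope_deg = rng.uniform(0, 45, n)          # terrain slope
dist_to_wui_km = rng.uniform(0, 5, n)      # distance to wildland-urban interface

# Synthetic loss signal with a deliberate non-linear interaction: dry soil
# AND connected fuel together drive losses far more than either alone.
loss = (np.exp(3 * (1 - soil_moisture) * fuel_connectivity)
        + 0.05 * slope_deg
        + rng.normal(0, 0.5, n))

X = np.column_stack([soil_moisture, fuel_connectivity, slope_deg, dist_to_wui_km])
model = GradientBoostingRegressor(random_state=0).fit(X, loss)

# Tree ensembles pick up the interaction that a linear rate table would miss.
print(dict(zip(["soil_moisture", "fuel_connectivity", "slope", "dist_to_wui"],
               model.feature_importances_.round(3))))
```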

But better models alone aren't enough. The industry needs a structural shift in how it thinks about these risks. Secondary perils can't remain an afterthought bolted onto catastrophe programs designed around hurricanes. They need dedicated capacity, modeling, and underwriting expertise. Parametric structures — which pay based on measured physical conditions rather than adjusted loss assessments — deserve particular attention here, because they eliminate the lengthy claims processes that often delay recovery from frequent, moderate-severity events by months.
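Mechanically, a parametric trigger can be as simple as a stepped payout keyed to a measured index. The sketch below uses hypothetical rainfall thresholds and payout amounts purely for illustration, not actual market terms.

```python
# Minimal sketch of a parametric payout: payment keyed to a measured index
# (here, hypothetical 24-hour rainfall at an agreed gauge), not to an
# adjusted loss. Thresholds and amounts are illustrative.
def parametric_payout(rainfall_mm: float) -> int:
    """Stepped payout in USD based on observed 24-hour rainfall."""
    tiers = [(250, 500_000), (150, 250_000), (100, 100_000)]
    for threshold, payout in tiers:
        if rainfall_mm >= threshold:
            return payout
    return 0

# Settlement needs only the gauge reading, so funds can move in days.
print(parametric_payout(175))  # -> 250000
```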

There's also a talent problem. The actuarial and underwriting workforce was trained on a world where catastrophe meant hurricane season and earthquake zones. The skill set required to underwrite wildfire risk in the wildland-urban interface, or to price severe convective storm exposure at the parcel level, looks more like data science and earth observation than traditional actuarial work. The companies and carriers that recognize this shift and invest accordingly will be the ones capable of writing the coverage the market desperately needs.

The protection gap isn't a natural phenomenon. It's a choice — the accumulated result of an industry that built its infrastructure around a specific type of catastrophe and hasn't adapted fast enough as the risk landscape shifted beneath it. Every year of $100-billion-plus secondary peril losses is another year that millions of businesses and communities absorb financial damage that should have been transferred.

We have the data. We have the computational power. We have the financial structures. What we need is the will to deploy them against the risks that are actually driving losses today — not the ones we've been preparing for since Hurricane Andrew in 1992.

