The insurance industry has spent decades obsessing over the big one — the Category 5 hurricane, the 9.0 earthquake, the once-in-a-generation catastrophe that keeps reinsurance executives awake at night. We've built entire capital structures, modeling ecosystems, and regulatory frameworks around these so-called primary perils. And while we were watching the horizon for the monster storm, the cumulative weight of smaller, more frequent climate events quietly became the larger problem.
In 2025, insured losses from non-peak perils — severe convective storms, wildfires, flooding, hail — reached approximately $98 billion, making it yet another year in which these so-called secondary perils, not the headline catastrophes, drove the bulk of global insured losses.
Uninsured secondary losses grow
And yet, the protection gap keeps widening.
Here's why. The modeling infrastructure the industry depends on was built around peak perils. Hurricane and earthquake models have been refined for decades; the tools for pricing hail, wildfire, and flood at comparable resolution remain far less mature.
This mismatch between where losses are actually occurring and where our modeling capability is strongest creates a dangerous feedback loop. Insurers can't price what they can't model. What they can't price, they can't underwrite. What they can't underwrite becomes someone else's problem — usually the homeowner, the farmer, or the small business owner who discovers after the fact that their coverage had gaps they never understood.
The Los Angeles wildfires in January 2025, which produced insured losses on the scale of a major hurricane landfall, made the point unmistakable: a so-called secondary peril can now deliver a primary-peril-sized bill.
Finding the coverage gaps
Closing this gap requires a fundamentally different approach to risk assessment. The next generation of catastrophe models needs to move beyond historical loss distributions and incorporate the environmental variables that actually drive secondary peril behavior — real-time satellite imagery, high-resolution terrain and vegetation data, soil moisture conditions, fuel connectivity mapping, and machine learning techniques that can identify non-linear relationships across dozens of interacting features. The data exists. The computational power exists. What's been missing is the willingness to build from scratch rather than patch legacy systems that were never designed for this problem.
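To make the idea concrete, here is a minimal, purely illustrative sketch of the kind of non-linear feature interaction such a model would learn. The feature names, scaling constants, and weights are hypothetical stand-ins, not a production model — a real system would fit these relationships to claims and observation data rather than hard-code them:

```python
# Illustrative sketch only: scoring parcel-level wildfire hazard from
# environmental features. All names and weights are hypothetical; a real
# model would learn these relationships from claims and sensor data.

def wildfire_hazard_score(soil_moisture: float, fuel_connectivity: float,
                          slope_deg: float, wind_speed_ms: float) -> float:
    """Combine features non-linearly; dryness amplifies fuel risk."""
    dryness = max(0.0, 1.0 - soil_moisture)   # 0 = saturated, 1 = bone dry
    terrain = min(slope_deg / 45.0, 1.0)      # steeper slopes spread fire faster
    wind = min(wind_speed_ms / 30.0, 1.0)
    # Interaction term: fuel connectivity only matters when fuels are dry --
    # exactly the kind of non-linear relationship a tree-based model captures
    # but a historical loss distribution never sees.
    base = dryness * fuel_connectivity
    return round(base * (0.5 + 0.3 * terrain + 0.2 * wind), 3)

# A dry, well-connected, windy hillside parcel scores far higher than a
# damp, fragmented one:
high = wildfire_hazard_score(0.1, 0.9, 30, 20)   # -> 0.675
low = wildfire_hazard_score(0.6, 0.3, 5, 5)      # -> 0.068
```

The point of the interaction term is the argument in the paragraph above: the drivers of secondary peril losses multiply rather than add, which is why models built on historical loss curves alone systematically miss them.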
But better models alone aren't enough. The industry needs a structural shift in how it thinks about these risks. Secondary perils can't remain an afterthought bolted onto catastrophe programs designed around hurricanes. They need dedicated capacity, modeling, and underwriting expertise. Parametric structures — which pay based on measured physical conditions rather than adjusted loss assessments — deserve particular attention here, because they eliminate the lengthy claims processes that often delay recovery from frequent, moderate-severity events by months.
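The mechanics of a parametric trigger are simple enough to sketch. The thresholds, index choice, and payout fractions below are hypothetical — real contracts negotiate these against an agreed third-party data source — but the structure shows why settlement can take days instead of months:

```python
# Minimal sketch of a parametric payout. The contract pays a pre-agreed
# fraction of the limit when a measured physical index (e.g. peak wind
# gust from an agreed data source) crosses set thresholds -- no loss
# adjustment step. All thresholds and fractions here are hypothetical.

PAYOUT_SCHEDULE = [    # (index threshold, payout fraction of limit)
    (50.0, 0.25),      # e.g. peak gust >= 50 m/s pays 25% of limit
    (60.0, 0.50),
    (70.0, 1.00),
]

def parametric_payout(measured_index: float, limit: float) -> float:
    """Return the payout for the highest threshold the index crossed."""
    fraction = 0.0
    for threshold, frac in PAYOUT_SCHEDULE:
        if measured_index >= threshold:
            fraction = frac
    return fraction * limit

parametric_payout(63.0, 1_000_000)   # -> 500000.0
parametric_payout(45.0, 1_000_000)   # -> 0.0, index never triggered
```

Because the payout depends only on the measured index, the trade-off is basis risk — the index can miss an insured's actual loss — which is the price paid for speed and certainty of settlement.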
There's also a talent problem. The actuarial and underwriting workforce was trained on a world where catastrophe meant hurricane season and earthquake zones. The skill set required to underwrite wildfire risk in the wildland-urban interface, or to price severe convective storm exposure at the parcel level, looks more like data science and earth observation than traditional actuarial work. The companies and carriers that recognize this shift and invest accordingly will be the ones capable of writing the coverage the market desperately needs.
The protection gap isn't a natural phenomenon. It's a choice — the accumulated result of an industry that built its infrastructure around a specific type of catastrophe and hasn't adapted fast enough as the risk landscape shifted beneath it. Every year of $100-billion-plus secondary peril losses is another year that millions of businesses and communities absorb financial damage that should have been transferred.
We have the data. We have the computational power. We have the financial structures. What we need is the will to deploy them against the risks that are actually driving losses today — not the ones we've been preparing for since 1992.










