InsureThink

How geospatial data is changing underwriting

Aerial image: Hurricane damage over Goodland, Florida. (Jon/Adobe Stock)

Ask most carriers whether they use geospatial data in underwriting and the answer is yes. Exposure to wildfire, flood, convective storm, or hurricane is shaped by dozens of location-specific factors — terrain, vegetation, proximity to hazard sources, local infrastructure, and more. But geospatial data is not a fixed thing, and what the industry has historically worked with is not the same as what's now available.


Advances in the availability and quality of commercial Earth observation data have unlocked new analytics that are moving the industry beyond static, point-in-time understandings of risk to a dynamic picture of how conditions on the ground are evolving, from seasonal vegetation shifts to post-event change detection.

But the availability of these analytics means little if organizations are unable to integrate them into their workflows, which can require changes to long-standing business practices. Moreover, the application of better risk insights should be pursued with the purpose of expanding a carrier's book of business, not as a justification for further exclusions.

As the severity and frequency of natural catastrophes continue to strain carriers, both challenges — adoption and intent — will determine whether this moment in geospatial analytics represents a genuine advance for the industry.

Three shifts worth understanding

The changes in commercial Earth observation over the last several years break down into three areas, each with distinct implications for underwriting.

The first is resolution. Legacy Earth observation data operates at 10 to 30 meters per pixel, covering more area than most residential properties. You can map land cover and flood zones at that scale, but you can't assess a specific property. The commercial availability of sub-meter imagery from aerial platforms changes that fundamentally. Roof condition scoring, debris detection, and defensible space measurements become possible at scale without a physical inspection. 
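A rough back-of-the-envelope sketch makes the difference concrete. The roof footprint and resolutions below are illustrative assumptions, not figures from any specific sensor:

```python
# Rough illustration (hypothetical numbers): how many pixels cover a
# typical residential roof at different ground sample distances (GSD).
roof_area_m2 = 150.0  # assumed footprint of a single-family roof

for gsd_m in (30.0, 10.0, 3.0, 0.5):          # meters per pixel
    pixel_area_m2 = gsd_m * gsd_m
    pixels_on_roof = roof_area_m2 / pixel_area_m2
    print(f"{gsd_m:>5.1f} m GSD -> ~{pixels_on_roof:,.1f} pixels on the roof")

# At 30 m the entire roof occupies a fraction of one pixel; at 0.5 m the
# same roof spans roughly 600 pixels, enough to assess condition and debris.
```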

The second is cadence. Traditional catastrophe models rely on exposure datasets that may be updated once a year, if that. A consistent, daily satellite revisit cycle opens up something different: the ability to accurately track how a property's risk profile changes over time. Vegetation encroaches, roofs deteriorate, and new structures appear.

But building reliable change detection models is a technically challenging endeavor. It requires more than frequent revisit rates. Each image needs to be captured under consistent conditions. Variations in sun angle, atmospheric haze, sensor calibration drift, and viewing geometry between passes can all produce apparent changes that are not real, or mask changes that are.

This is where constellation design becomes critical. Systems built for time-series analysis prioritize consistency in sensor configuration, orbit, illumination, and viewing geometry across passes. Imaging the same areas at the same time each day from the same angle reduces inter-pass variability and improves the reliability of change detection.

For an insurer tracking how a property's susceptibility to wildfire evolves across a policy lifecycle, that consistency is a prerequisite for analysis that can be trusted.
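A deliberately simplified sketch of why that consistency matters: if two passes over the same parcel are brought onto a common footing before differencing, a uniform brightness shift from sun angle or haze drops out, while a genuine change on the ground does not. The arrays, threshold, and normalization below are illustrative assumptions, not a production change-detection model.

```python
import numpy as np

def normalize(band: np.ndarray) -> np.ndarray:
    """Scale a reflectance band to zero mean and unit variance so a uniform
    brightness shift between passes (e.g. sun angle, haze) does not
    register as change."""
    return (band - band.mean()) / (band.std() + 1e-9)

def change_mask(pass_a: np.ndarray, pass_b: np.ndarray,
                threshold: float = 2.0) -> np.ndarray:
    """Flag pixels whose normalized difference between two passes exceeds
    the threshold. This only works if both passes share geometry: each
    pixel index must refer to the same ground location."""
    diff = normalize(pass_b) - normalize(pass_a)
    return np.abs(diff) > threshold

# Hypothetical 100x100 pixel scenes of the same parcel on two dates.
rng = np.random.default_rng(0)
before = rng.normal(0.3, 0.05, (100, 100))
after = before + 0.1                      # uniform brightening across the scene
after[40:60, 40:60] += 0.4                # a real change: new structure or debris

mask = change_mask(before, after)
print(f"Pixels flagged as changed: {mask.sum()}")
# Roughly the 20x20 changed block is flagged; the uniform shift is normalized away.
```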

The third is spectral diversity. Bands beyond the visible spectrum carry data that can tell underwriters which specific factors are driving changes in risk.

Near-infrared (NIR) is sensitive to the cellular structure of healthy vegetation. A property surrounded by lush-looking greenery in visible imagery may show pronounced stress signatures that are only apparent in NIR. 
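One common way to surface that stress signature is a normalized difference vegetation index (NDVI), which contrasts NIR against red reflectance. The reflectance values in this sketch are illustrative:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index: healthy vegetation reflects
    strongly in NIR and absorbs red, so values near +1 indicate vigorous
    canopy while stressed vegetation trends toward zero."""
    return (nir - red) / (nir + red + 1e-9)

# Hypothetical per-pixel reflectances for two patches that look
# equally green in visible imagery.
healthy = ndvi(nir=np.array([0.45]), red=np.array([0.05]))   # ~0.80
stressed = ndvi(nir=np.array([0.20]), red=np.array([0.10]))  # ~0.33
print(healthy, stressed)
```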

For flood risk, shortwave infrared (SWIR) bands that measure soil moisture are also critical for distinguishing permeable surfaces, such as soil and vegetation, from impermeable ones like concrete and asphalt. That distinction drives surface water runoff modeling at the parcel level, informing localized flood risk assessments that aggregate flood zone designations miss entirely.
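A minimal sketch of how that distinction can feed runoff modeling: classify a parcel's pixels as impervious or pervious, then take an area-weighted runoff coefficient in the spirit of the rational method (Q = C · i · A). The coefficients and classification here are illustrative placeholders, not calibrated values.

```python
import numpy as np

def composite_runoff_coefficient(impervious_mask: np.ndarray,
                                 c_impervious: float = 0.9,
                                 c_pervious: float = 0.2) -> float:
    """Area-weighted runoff coefficient for a parcel. The coefficients
    are illustrative placeholders, not calibrated engineering values."""
    frac_impervious = impervious_mask.mean()
    return frac_impervious * c_impervious + (1 - frac_impervious) * c_pervious

# Hypothetical classification of a parcel's pixels (True = concrete/asphalt),
# e.g. derived from SWIR/NIR band ratios.
parcel = np.zeros((50, 50), dtype=bool)
parcel[:, :20] = True                        # 40% of the parcel is paved
print(composite_runoff_coefficient(parcel))  # ~0.48
```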

Some of these signals have existed in scientific datasets for decades. What has changed is their availability at the resolution, revisit frequency, and commercial accessibility required for operational risk analytics.

From better data to better decisions

Better underlying data does not automatically produce better underwriting decisions. That adoption gap is where most geospatial solutions live or die.

Successful adoption depends on three things working together. 

First, the data must be delivered fast enough to support a decision at the point of bind — a score that arrives after the policy is written isn't an underwriting tool, it's a reporting tool.

Second, it must reach underwriters in an accessible format inside the decision-making workflow. The insurance graveyard of geospatial innovation is littered with analytics products that failed to meet carriers' workflow requirements.

Third, and most important, it must be explainable. When a carrier declines to write a risk or applies a surcharge based on a geospatial signal, it needs to answer: what was observed, where, when, and why it is material to the risk in question. That chain of reasoning can only be achieved if explainability is built into the analytics from the ground up — not as an afterthought. A model that can identify when risk changes and trace that shift back to the specific signal that produced it is a fundamentally different product from one that cannot.
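One hypothetical shape for that kind of explainable output is a score that carries the observed signals behind it. Every field name and value below is an assumption for illustration, not a reference to any particular product:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ObservedSignal:
    """One traceable observation behind a score: what was seen, where,
    when, and why it is material to the risk."""
    what: str
    where: str          # parcel ID or coordinates
    when: date
    materiality: str

@dataclass
class RiskScore:
    parcel_id: str
    score: float
    signals: list[ObservedSignal] = field(default_factory=list)

# Hypothetical output for a wildfire exposure score.
score = RiskScore(
    parcel_id="APN-0423-117",
    score=0.78,
    signals=[
        ObservedSignal(
            what="Vegetation within 10 m of the primary structure",
            where="APN-0423-117, NE quadrant",
            when=date(2024, 8, 14),
            materiality="Reduces defensible space below underwriting guideline",
        )
    ],
)
```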

Precision should be a path to yes

Data precision is only as useful as the question it seeks to answer. If the question is "How do we reduce our exposure?", the result will be more precise non-renewals and a shrinking insurable market. The people on the receiving end of that are homeowners in wildfire corridors who have watched premiums triple or policies disappear, and small businesses in flood-prone areas that can't get commercial coverage at any price. This is the human consequence of using precision to refine exclusion rather than to expand possibility.

It's worth being clear: carriers want to write business, and growth is the goal. Underwriting in high-hazard environments is hard not because carriers don't want those customers, but because the data hasn't historically been precise enough to price the risk accurately. The result has been broad exclusions that aren't a reflection of carrier intent so much as the limits of available information.

The same data that produces a non-renewal letter could change that calculus. Property-level resolution makes it possible to identify why a parcel is high risk: which trees create the highest ignition exposure, where defensible space is insufficient, and which building materials are most flammable.

It moves beyond a reason to decline and becomes a roadmap for mitigation, a basis for conditional coverage, and a foundation for parametric products tied to the actual risk profile of that location. It creates a path to writing business in areas that generalized models treat as write-offs, and to growing a book in markets where competitors are still retreating.

The goal is to improve risk management and expand access to more customers in markets that better data now makes possible.

