How do insurers leverage AI in climate-related claims?

[Image: AI-generated illustration showing an aerial view of a flood-ravaged village with submerged houses. Credit: Virtual Art Studio - stock.adobe.com]

Editor's Note: This is part of a series that examines the use of AI in the claims space.

Natural catastrophes caused over $137 billion in global insured losses in 2024, according to Swiss Re estimates, with Hurricanes Helene and Milton, severe convective storms in the United States and destructive floods accounting for most of the losses.

Digital Insurance interviewed Somesh Mukherje, vice president of solution architecture at ACORD Solutions Group, to learn more about how AI is used throughout the weather-related claims process.

Mukherje is responsible for developing AI-enabled solutions that target some of the most complex challenges the ACORD community and the global reinsurance industry encounter daily while serving their customers.

[Photo: Somesh Mukherje, VP of solution architecture at ACORD Solutions Group. Credit: ACORD Solutions Group]

How is AI being used in weather-related losses and claims processes, and what types of AI technologies are most commonly employed?

The insurance industry is now able to leverage AI-enabled technology and capabilities to improve underwriting process efficiency and fast-track claims management. Toolsets employing multimodal, vision-aware AI models are widely used to analyze available imagery for automated assessments during underwriting, as well as to auto-analyze damage during claims processing, in several instances without a human in the loop.
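As a rough illustration, the sketch below shows how a claims system might send a claim photo to a multimodal vision model and receive a structured damage estimate. The endpoint, payload shape and response fields are hypothetical assumptions for this example, not any specific vendor's API.

```python
"""Minimal sketch: sending a claim photo to a multimodal vision model for an
automated damage assessment. The endpoint, payload shape and response fields
are illustrative assumptions, not a specific vendor's API."""
import base64
import json
import requests

VISION_API_URL = "https://example-insurer.internal/vision/assess"  # hypothetical endpoint

def assess_damage(photo_path: str, peril: str = "flood") -> dict:
    # Encode the claim photo so it can travel in a JSON payload.
    with open(photo_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")

    # Ask the model for a structured damage estimate rather than free text,
    # so downstream claims systems can consume it without parsing prose.
    payload = {
        "image": image_b64,
        "prompt": (
            f"Assess {peril} damage to the insured property. "
            "Return severity (minor/moderate/major/total) and affected areas."
        ),
    }
    resp = requests.post(VISION_API_URL, json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()  # e.g. {"severity": "major", "areas": ["roof", "ground floor"]}

if __name__ == "__main__":
    print(json.dumps(assess_damage("claim_12345_roof.jpg"), indent=2))
```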

As more devices such as vehicles, machinery, tools and appliances, or the sensors embedded within them, come online and can transmit data from the edge, insurers have the opportunity to use AI for automated analysis and insight generation offline, in near real time or sometimes in real time. This allows them to better analyze consumer risk profiles, offer incentivized premium plans and, most importantly, predict an impending incident or respond proactively before it occurs.
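A minimal sketch of that edge-data idea follows, assuming a hypothetical feed of water-level readings from connected property sensors. A production system would use a streaming platform and a trained model rather than the fixed thresholds shown here.

```python
"""Minimal sketch of edge-sensor triage, assuming a hypothetical feed of
water-level readings from connected property sensors. A real deployment would
use a streaming platform and a trained model, not fixed thresholds."""
from dataclasses import dataclass

@dataclass
class SensorReading:
    policy_id: str
    water_level_cm: float
    rise_cm_per_hour: float

def flag_impending_flood(reading: SensorReading,
                         level_threshold: float = 15.0,
                         rise_threshold: float = 5.0) -> bool:
    # Flag when standing water is already high, or rising fast enough that
    # proactive outreach (alerts, mitigation crews) is worthwhile.
    return (reading.water_level_cm >= level_threshold
            or reading.rise_cm_per_hour >= rise_threshold)

readings = [
    SensorReading("POL-001", water_level_cm=2.0, rise_cm_per_hour=0.5),
    SensorReading("POL-002", water_level_cm=18.0, rise_cm_per_hour=1.0),
    SensorReading("POL-003", water_level_cm=4.0, rise_cm_per_hour=7.5),
]
for r in readings:
    if flag_impending_flood(r):
        print(f"Proactive alert for {r.policy_id}")
```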

There are AI-native solutions that leverage advanced natural language processing (NLP) models and large language models (LLMs) to assist underwriters by fast-tracking the analysis of years of loss history and pages' worth of supplemental information; this helps derive meaningful, accurate insights and reach critical underwriting decisions in minutes.
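The sketch below illustrates the concise-context idea behind such tooling: condensing a multi-year loss run into a short, structured prompt for an LLM. The record layout and the commented-out LLM call are illustrative assumptions, not a specific product's API.

```python
"""Sketch of condensing a multi-year loss run into a compact context string
for an LLM-assisted underwriting summary. The record layout and the
commented-out LLM call are illustrative assumptions."""

loss_history = [
    {"year": 2021, "peril": "hail",  "paid_usd": 18_500,  "status": "closed"},
    {"year": 2022, "peril": "flood", "paid_usd": 142_000, "status": "closed"},
    {"year": 2024, "peril": "wind",  "paid_usd": 9_300,   "status": "open"},
]

def build_context(records: list) -> str:
    # Keep the prompt concise: one line per loss plus an aggregate, so the
    # model reasons over structured facts rather than hundreds of pages.
    lines = [f"{r['year']}: {r['peril']} loss, ${r['paid_usd']:,} paid, {r['status']}"
             for r in records]
    total = sum(r["paid_usd"] for r in records)
    lines.append(f"Total incurred over period: ${total:,}")
    return "\n".join(lines)

prompt = ("Summarize the loss experience below and flag underwriting concerns:\n"
          + build_context(loss_history))
# summary = llm_client.complete(prompt)  # hypothetical LLM call
print(prompt)
```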

How are insurers training these AI models?

Actual historical data (claims, policy and loss history), third-party data sources (GPS or satellite data, some publicly available) and synthetically curated datasets, used where real data is scarce, all contribute to the training corpora leveraged to train or tune the AI models. The techniques applied to train or tune a model depend on its underlying machine learning architecture.
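As an illustration of the synthetic-data point, a pipeline might augment a scarce class of real claim records with perturbed copies before training. The feature set and perturbation scheme below are assumptions for this sketch, not a domain-validated generator.

```python
"""Sketch of augmenting a scarce class of real claim records with synthetic
examples. The feature set and perturbation scheme are illustrative
assumptions; real pipelines would use domain-validated generators."""
import random

real_total_loss_claims = [
    {"water_depth_cm": 120, "building_age_yrs": 40, "paid_usd": 210_000},
    {"water_depth_cm": 95,  "building_age_yrs": 25, "paid_usd": 175_000},
]

def synthesize(record: dict, jitter: float = 0.1) -> dict:
    # Perturb each numeric feature within +/- jitter to create a plausible
    # synthetic neighbor of a real record.
    return {k: round(v * random.uniform(1 - jitter, 1 + jitter), 2)
            for k, v in record.items()}

synthetic = [synthesize(random.choice(real_total_loss_claims)) for _ in range(100)]
training_set = real_total_loss_claims + synthetic  # combined corpus for model training
print(f"{len(training_set)} records ({len(synthetic)} synthetic)")
```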

Has AI improved claim accuracy and speed? Customer satisfaction?

With the right guardrails and human-in-the-loop strategies, AI can help improve both the accuracy and the speed of claims handling. AI-enabled tools can assess risk and incurred damages more accurately, and can predict or proactively respond to incidents, in some cases in real time. This enables incentivized premium offerings, faster underwriting decisions and fast-tracked triaging of submitted claims, which in turn has a direct, positive impact on customer satisfaction.

What are the challenges or limitations in applying AI to weather-related insurance claims?

As with any rapidly evolving technology, there are areas of concern that require diligent design consideration when implementing or onboarding AI capabilities, including the application of human-in-the-loop protocols.

The industry has seen rapid maturity in the price-to-performance and availability of enterprise-grade security guardrails to protect the privacy and sanctity of data used in training, or shared for analysis or predictions via third-party AI tools and hosted models. However, challenges and regulatory concerns still remain around the explainability of AI model inferencing. Factors such as quality or scarcity of training and tuning data can hinder the explainability of an AI model's inferences; many AI models, especially neural network-based deep learning ones like LLMs, are inherently black boxes.

Tools like ACORD Transcriber help mitigate this issue by enabling surrogate, in-house-trained insurance domain models where appropriate, feeding concise and tailored context to limit hallucinations, and applying systematic evaluations to generate model confidence scores for every insight or inference the model produces. Underwriters or claims processing agents can use these confidence scores to flag certain AI-produced outcomes for human-in-the-loop review. The intent should be to automate or implement passthrough processing where possible, and verify as necessary.
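A minimal sketch of that confidence-based routing follows. The threshold and record shape are assumptions for illustration, and the scoring itself (the part the systematic evaluations would supply) is not shown.

```python
"""Minimal sketch of confidence-based routing: automate high-confidence
inferences, queue the rest for human review. The threshold and record shape
are illustrative assumptions; the confidence scoring itself is not shown."""

REVIEW_THRESHOLD = 0.85  # assumed cutoff; real values would be calibrated per use case

def route(inference: dict) -> str:
    # Pass high-confidence outputs straight through; flag the rest for review.
    if inference["confidence"] >= REVIEW_THRESHOLD:
        return "straight-through"
    return "human-review"

inferences = [
    {"claim_id": "CLM-101", "field": "date_of_loss", "value": "2024-10-09", "confidence": 0.97},
    {"claim_id": "CLM-102", "field": "cause_of_loss", "value": "wind", "confidence": 0.62},
]
for inf in inferences:
    print(inf["claim_id"], inf["field"], "->", route(inf))
```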

How are regulations shaping the use of AI in claims?

The integration of AI in claims processing is being significantly shaped by evolving regulatory frameworks, with the European Union's (EU) AI Act serving as a foundational global precedent. This comprehensive legislation employs a risk-based approach, classifying AI systems into categories: unacceptable, high, limited and minimal risk. The stringency of regulatory obligations directly correlates with the assigned risk level, aiming to ensure AI safety, promote transparency and foster trustworthy AI innovation.

The EU AI Act's influence extends beyond its direct jurisdiction, impacting the development and deployment of AI tools globally as other regulatory bodies consider similar frameworks. Notably, AI systems that facilitate automated decision-making in critical areas such as insurance underwriting and claims administration fall into the high-risk category under the EU AI Act. This classification necessitates adherence to rigorous technical requirements. Key requirements for these high-risk AI applications in claims include:

  • Explainability: The ability to provide clear and understandable justifications for AI-driven decisions.
  • Bias mitigation: Proactive measures to identify and eliminate biases in both training datasets and model predictions to ensure fair and equitable outcomes.
  • Detailed logging: Comprehensive maintenance of application logs to record model inferences, enabling auditing and accountability.

These regulatory requirements are fundamentally reshaping how AI tools are designed, developed, and integrated into underwriting or claims operations.
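As an illustration of the detailed-logging requirement above, the sketch below records each model inference as a structured, append-only audit entry. The field names and JSON-lines storage are assumptions for this example, not a prescribed format.

```python
"""Sketch of inference audit logging: each model decision is appended as a
structured record so it can be audited later. Field names and the JSON-lines
file are illustrative assumptions, not a regulatory specification."""
import datetime
import json

AUDIT_LOG = "claims_ai_audit.jsonl"

def log_inference(claim_id: str, model_version: str, inputs: dict,
                  output: dict, confidence: float) -> None:
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "claim_id": claim_id,
        "model_version": model_version,
        "inputs": inputs,          # what the model saw
        "output": output,          # what it inferred
        "confidence": confidence,  # supports later explainability reviews
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_inference("CLM-2024-0458", "damage-vision-v1.3",
              {"image_id": "IMG-991", "peril": "hurricane"},
              {"severity": "major"}, 0.91)
```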
