InsureThink

AI is transforming proof of insurance


For a decade or more, conversations about artificial intelligence in insurance underwriting focused on a central question: Can AI be accurate enough to trust?


Today, that question is largely settled. Modern AI systems can outperform traditional actuarial and rules-based approaches across a wide range of underwriting use cases. They identify patterns humans miss, incorporate broader data sources, and continuously learn from outcomes. In terms of pure predictive power, the technology has arrived.

And yet, AI adoption in insurance remains uneven and cautious. The reason has little to do with accuracy—and everything to do with explainability.

Arriving at this conclusion requires nothing more than a simple process of elimination, bolstered by three words: insurers are smart. Insurers have always been open to better predictions, and at its core underwriting is about assessing risk as accurately as possible. If a shiny new tool prices risk more precisely, improves loss ratios and enhances portfolio performance, then carriers want that solution, because it helps them do their jobs better.

But of course, accuracy alone has never been sufficient in insurance. So, while the benefits of AI tools are clear, their utilization in certain processes has become a sector-specific sticking point. With insurance operating under intense regulatory scrutiny, every underwriting decision must be defensible—not just internally, but to regulators, auditors, reinsurers, and even courts. Insurers don't just need to make the right decision; they need to explain why that decision was made.

That requirement has shaped underwriting for decades. Often, traditional statistical models and rating factors persist not because they are the most accurate, but because they are understandable. Underwriters can trace a decision back to a handful of variables, and regulators can follow the logic step by step.

Unfortunately, for years that dynamic created a false tradeoff in the market, pitting explainability against performance instead of pairing them. It's a presumption worth puncturing, so let's do precisely that.

"Chipping away at cement" — The false divide between transparency and performance

First-wave machine learning models were certainly effective, even transformative, but they often failed to "show their work." Deft as they were at producing data-driven scores and recommendations, they rarely provided meaningful insight into how those conclusions were reached.

For underwriters, that was frustrating. For compliance teams, it was unacceptable. As a result, many insurers made the understandable choice of accepting less accuracy in exchange for greater transparency. Even insurers with this mindset that did adopt AI solutions did so through a limiting lens: models that were "good enough" but explainable were preferred over more powerful approaches that, insurers concluded, could not be concisely justified.

This wasn't resistance to innovation, but rather adherence to good governance. Underwriters need to understand how a recommendation was formed so they can trust it and act upon it. Regulators need clarity on methodology, bias controls and decision logic. Executives need confidence that AI-driven decisions align with company policy and risk appetite. Without explainability, even the most accurate insurance industry AI models are only "getting it half-right."

Even obsolete perceptions risk becoming permanent when left unaddressed—the cement sets, so to speak. As the capabilities of AI solutions in the insurance space advance dramatically, the misconception that their adoption still comes at the expense of explainability must first be chipped away at, then smashed entirely.

Part of clearing the way for bigger, better innovative tools is clearing away the ingrained biases against those solutions' predecessors. The path will truly open up once the industry realizes that today's more comprehensive AI solutions deliver both superior accuracy and transparent, regulator-ready explanations.

A collective "Eureka" moment: Explainable AI and adoption acceleration

Recent advances in AI, and particularly the use of large language model techniques layered atop advanced analytics, are transforming what explainability looks like in practice. As these substantial improvements become increasingly evident to an ever-wider swath of decision-makers, a collective Eureka moment is fast approaching—one with the potential to dramatically accelerate AI adoption.

Rather than forcing underwriters to interpret raw model outputs, modern systems can translate complex patterns into plain-language reasoning. Instead of merely flagging a risk, AI can now articulate the drivers behind that risk in terms that align with underwriting judgment.
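As an illustration of that translation step, here is a minimal sketch. The factor names, weights, and baselines are invented, standing in for a real model's learned attributions; the point is only the shape of the output, a score paired with ranked, human-readable drivers:

```python
# Purely illustrative: factor names, weights, and baselines are hypothetical,
# standing in for the learned attributions of a real underwriting model.

def explain_decision(applicant, weights, baseline):
    # Contribution of each factor = weight * deviation from the baseline profile.
    contribs = {f: weights[f] * (applicant[f] - baseline[f]) for f in weights}
    score = sum(contribs.values())
    # Rank drivers by magnitude and phrase each material one in plain language.
    drivers = sorted(contribs.items(), key=lambda kv: abs(kv[1]), reverse=True)
    reasons = [f"{name} {'raised' if c > 0 else 'lowered'} risk by {abs(c):.2f}"
               for name, c in drivers if abs(c) > 0.05]
    return score, reasons

weights   = {"prior_claims": 0.8, "er_visits": 0.5, "chronic_conditions": 0.6}
baseline  = {"prior_claims": 0.2, "er_visits": 0.4, "chronic_conditions": 0.3}
applicant = {"prior_claims": 2, "er_visits": 0, "chronic_conditions": 1}

score, reasons = explain_decision(applicant, weights, baseline)
```

The result is a score accompanied by ranked reasons such as "prior_claims raised risk by 1.44", which is the form of explanation an underwriter can actually weigh and act on.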

For example, in healthcare underwriting, today's most sophisticated platforms can provide analyses that previously required manual nurse review. Underwriters are afforded not only a decision, but also a clear explanation of the contributing factors—the conditions, utilization patterns, or cost drivers that influenced the outcome.

This shift is critical. Explainability is no longer about exposing every mathematical detail of a model. It's about making the decision logic understandable, defensible, and actionable for the people who rely on it.

When underwriters understand why the model reached a conclusion, confidence rises. When regulators can see how critical considerations are measured and mitigated, resistance falls. And once that guard deservedly drops, the benefits of modern AI solutions can be fully realized and substantiated.

For instance, explainability plays a central role in addressing one of the industry's most sensitive concerns: bias. Insurers must be able to demonstrate not only that their models perform well, but that they do so fairly. This requires visibility into training data, model behavior across populations, and outcomes over time.

Such needs are best served by AI solutions that operate across multiple carriers and geographies: pools broad and deep enough to evaluate bias more comprehensively than any single insurer could. This cross-market perspective allows potential issues to be identified early, ruled out where appropriate, or corrected when necessary.
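A small piece of that monitoring can be sketched in code. The check below is a simplified illustration, not any regulator's required test: it compares approval rates across groups and flags any group whose rate falls below a chosen fraction of the best-served group's (the 0.8 default echoes the common "four-fifths" rule of thumb):

```python
# Simplified group-fairness check; illustrative, not a regulatory standard.
# The 0.8 default echoes the common "four-fifths" rule of thumb.

def disparate_impact(decisions, groups, threshold=0.8):
    # Approval rate per group.
    rates = {}
    for g in set(groups):
        subset = [d for d, gg in zip(decisions, groups) if gg == g]
        rates[g] = sum(subset) / len(subset)
    # Compare each group's rate to the best-served group's rate.
    reference = max(rates.values())
    ratios = {g: r / reference for g, r in rates.items()}
    return {g: ratio for g, ratio in ratios.items() if ratio < threshold}

decisions = [1, 1, 0, 1, 0, 1, 1, 0, 0, 0]   # 1 = approved, 0 = declined
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
flagged = disparate_impact(decisions, groups)  # group B approved at 2/3 of A's rate
```

In practice such checks would run continuously over production decisions and across many more dimensions, but the principle is the same: disparities are measured, surfaced, and documented rather than discovered after the fact.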

Such attributes are essential for long-term trust. AI that cannot explain how it avoids bias—or how bias is detected and eliminated—will fall short in the eyes of regulatory authorities regardless of performance. Here, explainability turns AI from a perceived risk into a governance asset.

Real-world impact: Where explainable AI excels

The value of explainable AI becomes especially clear in complex underwriting scenarios where human intuition struggles. Small commercial P&C policies are a strong example. More than 98% never experience a loss, yet a small subset accounts for the majority of claims. Identifying that subset is extremely difficult using traditional approaches.

In such scenarios, AI can analyze subtle combinations of factors that humans would likely overlook, grouping policies by their likelihood of future loss. Importantly, explainable AI can also articulate what those factors are, allowing underwriters to validate and act upon the insight rather than blindly accepting a score.
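The concentration claim above can be made concrete with a toy portfolio. The loss probabilities here are synthetic, standing in for a model's predictions; the sketch only shows how ranking policies by predicted risk surfaces the small subset that carries most of the expected loss:

```python
# Synthetic data: two high-risk policies hidden in a book of 100.
# Probabilities stand in for a model's predicted likelihood of future loss.

def top_decile_loss_share(policies):
    # policies: list of (policy_id, predicted_loss_probability)
    ranked = sorted(policies, key=lambda p: p[1], reverse=True)
    k = max(1, len(ranked) // 10)              # riskiest 10% of the book
    total = sum(p for _, p in ranked)
    return sum(p for _, p in ranked[:k]) / total

policies = [(f"P{i:03d}", 0.50 if i < 2 else 0.01) for i in range(100)]
share = top_decile_loss_share(policies)        # riskiest decile carries > half
```

In this toy book the riskiest decile carries over half the expected loss. An explainable system would accompany that ranking with the factors that put each policy there, so underwriters can validate the grouping rather than take the score on faith.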

The same applies in small-group healthcare underwriting, where a single high-cost individual can dramatically impact profitability. Advanced models can detect early signals of that risk—markers traditional methods might miss—while still explaining the drivers in ways that align with underwriting judgment.

In both cases, adoption depends less upon raw accuracy than on whether underwriters can understand and defend the recommendation. That means AI models must align with—and speak the language of—regulatory expectations, underwriting workflows, and enterprise risk management practices. It means offering transparency that builds confidence rather than demanding blind trust.

The future of AI-driven underwriting is not about replacing human judgment. It's about augmenting it with systems that can explain themselves as clearly as they predict outcomes. When that balance is achieved, the traditional tradeoff between explainability and performance disappears, and with it the longstanding misconception that AI models must sacrifice one for the other.

The technology is ready. The data is available. The models are proven. As explainable AI matures, insurers no longer must choose between what works and what can be explained.

