InsureThink

AI compliance issues and legal liability risks for insurers

In January 2026, the New York Department of Financial Services hit insurers with over $82 million in fines. That same month, Georgia slapped 22 carriers with $25 million in total penalties for parity violations. Meanwhile, Colorado has carved out its own AI regulatory framework with requirements that go far beyond anything proposed by the NAIC (National Association of Insurance Commissioners).

If you're deploying AI in your insurance operations and you can't explain exactly how it makes decisions, you aren't innovating. You're building a legal liability.

I've spent over 20 years helping insurers navigate technology pivots. I've seen plenty of trends come and go, but the regulatory response to AI is different. It's faster, more aggressive, and more fragmented than anything the industry has ever faced. And frankly, most carriers aren't ready.

The regulatory landscape is shifting faster than you think

The NAIC released its Model Bulletin on AI in December 2023, setting baseline expectations for AI governance. More than two years later, only 24 of the 50 states have adopted it – and many of those did so with their own "special sauce" of modifications and interpretations (NAIC Adoption Tracker, Q1 2026). This is what makes AI compliance in the U.S. market so treacherous: there is no single standard.

  • Colorado (SB 21-169) requires insurers to test AI systems for unfair discrimination and report results annually.
  • Virginia swapped the NAIC's "mitigating risk" language for "eliminating risk," a one-word change that turns a "best effort" into an absolute mandate.
  • New York's Circular Letter No. 1 requires insurers to prove their algorithms aren't producing discriminatory outcomes, complete with specific documentation hurdles.

According to RegEd, the insurance industry faces over 3,300 regulatory changes a year, and an increasing slice of that pie is dedicated specifically to AI and automated decision-making. These requirements aren't theoretical; as the January fines show, they are being enforced with teeth.

Here's the uncomfortable truth: If a regulator asks why your AI denied a claim or hiked a premium, and your answer is "the model decided," you have a problem. A very expensive problem.

The "Black Box" Trap

According to Deloitte's 2025 Global Insurance Outlook, 82% of insurers are now leveraging Generative AI. However, there is a critical "oversight gap."

Most insurance AI rollouts follow a predictable pattern. A team builds or buys a model. It performs beautifully in testing. It goes into production. And then someone asks: "How does it actually make decisions?" The room goes silent.

This is the Black Box Trap. It isn't just a compliance issue — it's a business risk. When your underwriting model can't explain why it priced a policy at a certain tier, you can't defend that price to a regulator. When your claims system can't justify why it flagged a file as suspicious, you can't justify the delay to the policyholder. When your fraud algorithm can't prove it isn't "redlining" protected groups, you are one audit away from a class-action lawsuit.

The State of AI in Business 2025 report revealed that 95% of organizations are not seeing a return on their AI spend. I'd argue a huge part of that failure stems from deploying AI without the governance infrastructure to sustain it.

What "explainable AI" actually means in insurance

When I talk about explainable AI, I'm not talking about dumbing down your models. I'm talking about building systems that can answer three specific questions at any given moment (a minimal sketch follows the list):

  1. What data did the model use to reach this decision? It's not just about a list of inputs. It's about proving data sources are compliant and unbiased across state lines. Privacy laws in California aren't the same as in Texas. Fair lending rules in New York apply differently to auto than they do to property. Your system needs to know the difference.
  2. Why did the model reach this specific conclusion? A "confidence score" is not an explanation. A probability is not a justification. Regulators want to see the chain of reasoning — which factors carried the most weight, how they interacted, and whether the outcome would change if a protected characteristic were removed.
  3. Who changed what, and when? Every rule tweak, model update, and parameter adjustment needs a timestamp, an author, and an impact assessment. The NAIC Model Bulletin explicitly calls for governance that includes "documentation of AI systems, including their intended purpose, inputs, and decision-making processes." Without an audit trail, you have no proof of oversight.
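To make this concrete, here is a minimal sketch of a decision record that captures all three answers. The structure and field names are hypothetical, my illustration rather than any regulatory standard or vendor schema; the point is that every automated decision should carry its inputs, its weighted reasoning, and its change history with it.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical structures for illustration -- not a regulatory standard.

@dataclass
class FactorContribution:
    name: str         # e.g., "roof_age_years"
    value: object     # the input the model actually saw
    weight: float     # relative contribution to the outcome (question 2)
    data_source: str  # provenance, so compliance can verify it (question 1)

@dataclass
class ChangeLogEntry:
    timestamp: datetime  # when the rule or model changed (question 3)
    author: str          # who changed it
    description: str     # what changed
    impact_note: str     # the impact assessment made before go-live

@dataclass
class DecisionRecord:
    decision_id: str
    jurisdiction: str                  # state whose rules governed the decision
    outcome: str                       # e.g., "premium_tier_3"
    factors: list[FactorContribution]  # answers questions 1 and 2
    counterfactual_checked: bool       # outcome stable with protected traits removed?
    change_log: list[ChangeLogEntry] = field(default_factory=list)  # question 3

record = DecisionRecord(
    decision_id="UW-2026-000123",
    jurisdiction="CO",
    outcome="premium_tier_3",
    factors=[
        FactorContribution("roof_age_years", 18, 0.42, "inspection_report"),
        FactorContribution("claims_last_5y", 2, 0.31, "clue_report"),
    ],
    counterfactual_checked=True,
)
```

If every decision in production can be serialized into something like this, the regulator's three questions stop being existential.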

Building compliance into the architecture

The carriers getting this right don't "bolt on" compliance after the fact. They bake it into the architecture from day one.

Principle 1: Separate business logic from code. When your underwriting logic is hard-coded, every change requires a developer, a release cycle, and regression testing across 50 jurisdictions. This makes auditability nearly impossible. According to the PwC 2025 Insurance Technology Survey, 70–80% of IT budgets are swallowed by legacy maintenance, leaving scraps for governance. External rule engines solve this; compliance officers can update state-specific rules without touching code, and every change is logged with full context.
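As a rough illustration of the difference, here is a sketch of state-specific rules living in data rather than code, with every change logged. The rule keys, values, and logging format are assumptions made up for the example, not any real rule engine's syntax.

```python
import json
from datetime import datetime, timezone

# Rules live outside the application code. A compliance officer edits this
# data; developers never touch it. Keys and values are invented for the sketch.
RULES = {
    "CO": {"max_credit_weight": 0.0,  "bias_test_required": True},
    "TX": {"max_credit_weight": 0.25, "bias_test_required": False},
}

AUDIT_LOG = []  # in production this would be an append-only store

def update_rule(state: str, key: str, value, author: str, reason: str) -> None:
    """Apply a rule change and record who changed what, when, and why."""
    old = RULES[state].get(key)
    RULES[state][key] = value
    AUDIT_LOG.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "author": author,
        "change": f"{state}.{key}: {old} -> {value}",
        "reason": reason,
    })

update_rule("TX", "max_credit_weight", 0.20,
            author="compliance@example.com",
            reason="Align with updated state guidance")
print(json.dumps(AUDIT_LOG, indent=2))
```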

Principle 2: Jurisdictional awareness. Your AI needs to know that a pricing decision in a "file and use" state (like Illinois) requires different documentation than in a "prior approval" state (like New York). According to Milliman, the time to get homeowners' rates approved in NY jumped from 62 days in 2023 to 233 days in 2025. If your system can't automate jurisdiction-specific documentation, you're either wasting resources or missing critical filings.
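Here is what jurisdiction-aware behavior could look like in practice, sketched under the assumption that filing regimes are captured as configuration. The filing categories are real (file-and-use versus prior-approval), but the state mapping and task names below are simplified for illustration.

```python
# Route each rate change to the documentation workflow its jurisdiction
# requires. Mapping abbreviated to two states; task names are illustrative.
FILING_REGIME = {
    "IL": "file_and_use",    # file the rate and use it immediately
    "NY": "prior_approval",  # the regulator must approve before use
}

def documentation_tasks(state: str) -> list[str]:
    regime = FILING_REGIME.get(state)
    if regime == "prior_approval":
        # Heavier package up front; nothing ships until approval lands.
        return ["actuarial_memorandum", "rate_filing", "await_approval"]
    if regime == "file_and_use":
        # Lighter package, but the filing still has to be complete and on time.
        return ["rate_filing", "post_use_monitoring"]
    raise ValueError(f"No filing regime configured for state {state!r}")

print(documentation_tasks("NY"))
# ['actuarial_memorandum', 'rate_filing', 'await_approval']
```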

Principle 3: Pre-deployment impact analysis. Before any AI model or rule change goes live, you should know exactly which products in which states will be impacted. No surprises, no emergency patches, and no "we didn't realize this would affect Florida homeowners" moments.
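At its simplest, that check is an intersection between a proposed change and a catalog of what each product in each state depends on. A minimal sketch, with made-up product data:

```python
# Before a model or rule change ships, compute exactly which product/state
# combinations it touches. The catalog below is invented for the example.
PRODUCT_CATALOG = [
    {"product": "HO-3", "state": "FL", "depends_on": ["wind_model_v4", "roof_rule"]},
    {"product": "HO-3", "state": "CO", "depends_on": ["wildfire_model_v2"]},
    {"product": "Auto", "state": "NY", "depends_on": ["telematics_score_v1"]},
]

def impact_analysis(changed_component: str) -> list[tuple[str, str]]:
    """Return every (product, state) pair a proposed change would affect."""
    return [(p["product"], p["state"])
            for p in PRODUCT_CATALOG
            if changed_component in p["depends_on"]]

# The "we didn't realize this would affect Florida homeowners" check:
print(impact_analysis("roof_rule"))  # [('HO-3', 'FL')]
```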

The competitive advantage no one talks about

Most insurers treat compliance as a "cost of doing business." That's a mistake. It is a competitive advantage.

Insurers who can prove explainability and auditability move through regulatory filings faster. They enter new states with confidence rather than caution. They launch products in weeks, not months, because their oversight infrastructure is already in place.

And then there's the business case that doesn't show up in compliance budgets: Trust. Agents who understand how their AI tools work are more likely to use them. Policyholders who get clear explanations for decisions are less likely to complain. Regulators who see a robust governance framework are less likely to dig deeper.

What you should do right now

If you are deploying AI or planning to, find the answers to these three questions:

  1. Can your AI systems explain every decision in a way a state regulator would accept?
  2. Do you have a jurisdiction-aware governance framework that adapts to the requirements of every state you operate in?
  3. Is your compliance team involved in AI deployment from day one, or do they find out about new models after they're already in production?

AI in insurance is no longer optional. But deploying it without explainability isn't innovation — it's recklessness. The regulators have made their move. The question is: Is your architecture ready to answer?

