What's the future of AI regulation?


Nichole Windholz, chief information security officer at Onspring, shares written responses with Digital Insurance on AI regulations and action items for implementing internal governance.

The One Big Beautiful Bill Act proposed a 10-year moratorium on state-level regulation of AI models and systems. The final version of the bill did not include the provision.


Can you share how the AI provision in the "One Big Beautiful" bill may impact the insurance industry?

Even though the bill's final version removed the proposed 10-year moratorium on state-level AI regulation, the underlying tension it highlighted is still very real. For insurers that are already managing compliance across multiple states, this continued piecemeal approach to AI regulation creates a patchwork of risks. You're no longer just thinking about HIPAA or SOC 2 compliance; now you're navigating AI-specific guidance that may vary dramatically from one jurisdiction to the next. In practice, this means internal governance structures must keep pace with how AI is being used for everything from underwriting automation to predictive claims models. It puts the burden back on internal teams to create governance models that are nimble enough to respond to that variability, while still setting and enforcing clear boundaries across departments.

Why has Onspring formed an AI Governance Council and what kind of work is it doing?

We formed the council proactively because we knew the questions would come. Our clients are asking how to manage AI use responsibly, so we felt it was important to model that internally. The council is focused on making sure AI is used safely and deployed thoughtfully. We kicked things off by creating a clear usage policy and an approved tools list, but it's evolved into something much more collaborative. It's a forum where we talk through use cases, assess risk and share findings across teams. And because we have a diverse set of voices and insight from product, sales, development and leadership at the table, we're able to look at AI adoption from every angle.

What do policies around Gen AI and data privacy look like?

We treat data privacy as non-negotiable. One of the first things we made clear in our policy is that sensitive company data like client information, financials and anything proprietary should never be entered into a public LLM. That may seem obvious, but in the rush to use these tools for convenience, it's easy for your teams to forget. An action like entering policyholder or claims data into an unvetted LLM tool, however unintentionally, could breach confidentiality or even trigger compliance violations.

We also emphasize that AI isn't giving you a guaranteed accurate answer; it's giving you the most statistically likely one. That distinction matters. Employees should be educated on this so they understand that outputs still need human review, especially if they're being used in client-facing materials or compliance workflows.

What action items can insurance companies take to implement internal AI governance?

AI isn't something you can avoid. It's here, readily accessible and employees are curious. That curiosity means AI will inevitably find its way into daily work. Being proactive is a mandate, not a nice-to-have. Start with visibility. You can't govern what you don't know is happening, and most companies would be surprised at how many AI tools are already in use informally. Next, create a practical policy that clearly shows employees what is allowed and where they can go for support. We also found that cross-functional input is key. This isn't just an IT issue or a legal concern. Sales teams, marketers, analysts, etc. – they're all using these tools in different ways. In addition to your legal and compliance teams, it's a good idea to bring in underwriters, claims leaders and actuarial teams. Having everyone represented in your governance efforts gives you a much fuller picture of risk and opportunity.

What does continuous testing of AI algorithms look like and who does that work?

The Onspring AI Governance Council was originally formed in response to our internal use of third-party AI tools. So right now, our testing primarily revolves around output monitoring and access controls. Our focus is on tracking how these tools are used and ensuring we're not inadvertently introducing risk. In addition, we are embedding AI into our actual products. With AI in the Onspring platform, we have created a QA process that looks more like model validation, including testing for bias, tracking hallucinations and reviewing performance against expected outcomes. Even outside of that product-level validation, our Governance Council is accountable for reviewing usage trends, policy exceptions and any red flags.
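To make the idea of output monitoring concrete, a minimal sketch of one way to flag model outputs that diverge from expected outcomes might look like the following. This is purely illustrative: the function names, the token-overlap check and the threshold are hypothetical and not Onspring's implementation, which is not described in detail here.

```python
# Minimal, illustrative sketch of output monitoring for an AI-assisted workflow.
# Names, scoring method and threshold are assumptions, not a real product's QA process.
from dataclasses import dataclass


@dataclass
class ReviewItem:
    prompt: str
    model_output: str
    expected_outcome: str  # reference answer supplied by a human reviewer


def flag_for_review(items: list[ReviewItem], min_overlap: float = 0.5) -> list[ReviewItem]:
    """Return items whose output diverges from the expected outcome.

    Uses a crude token-overlap score as a stand-in for whatever similarity
    or rules-based check a real validation pipeline would apply.
    """
    flagged = []
    for item in items:
        expected = set(item.expected_outcome.lower().split())
        actual = set(item.model_output.lower().split())
        overlap = len(expected & actual) / max(len(expected), 1)
        if overlap < min_overlap:
            flagged.append(item)  # possible hallucination or drift; route to a human
    return flagged


if __name__ == "__main__":
    sample = [
        ReviewItem(
            prompt="Summarize the claim status.",
            model_output="The claim was approved and paid in full.",
            expected_outcome="The claim is still under review.",
        )
    ]
    for item in flag_for_review(sample):
        print("Needs human review:", item.prompt)
```

In a real governance program, the comparison step would likely be replaced by domain-specific checks (bias metrics, compliance rules, reviewer sign-off), but the overall shape of comparing outputs against expected outcomes and routing exceptions to people is the same.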

Anything else you would like to share?

Overall, the key message here is proactivity. AI adoption in the workplace feels similar to where social media was about a decade ago – initially seen as a novelty, then suddenly everywhere and with major implications for business (both positives and negatives). In the early days, policies were reactive measures that came after issues had already surfaced. With AI, we don't have that kind of runway. It's critical to create a foundation for safe use, and that's what we're doing with the formation of the council. We want to get ahead of potential risks and offer guidance before things go off track.

To round things out, governance and internal guardrails around AI are not a push to halt use. We're advocates of AI, but it's mission-critical to channel it responsibly. We've reached the point where almost every team at Onspring is leveraging AI in some way. Instead of trying to clamp down on that innovation, we've created a framework that supports it safely.
