NAIC starts work on AI evaluation method for regulators


Takeaways:

  • Proposed questionnaire elaborates on previous 2023 guidance
  • NAIC regulators so far have reviewed just one of four parts of the proposal
  • Critics from the industry say the plan will generate more data than needed

To go beyond its 2023 Model Bulletin guidance, the National Association of Insurance Commissioners (NAIC) is now working on a means of requiring insurers to disclose more about their AI systems.

The Model Bulletin mostly comprises governance and risk management policies and procedures for insurers' use of AI, while the proposed AI Systems Evaluation Tool sets out a questionnaire template regulators can use to learn about the applications insurers deploy.

The tool will have four parts, according to materials presented at NAIC's fall meeting earlier this month: quantifying use of AI systems; a governance risk assessment framework; details for high-risk models; and details for model data.

Doug Ommen, Iowa insurance commissioner

NAIC's Big Data and Artificial Intelligence Working Group on December 7 heard a presentation and discussion of changes in a second draft of the plan, led by Doug Ommen, the group's co-vice chair and Iowa's insurance commissioner. The session lasted four hours and covered just the first part of the tool. The group will have more meetings in January and February to discuss the other parts and intends to draft a third version of the tool, Ommen said in the meeting.

Revisions to the first part addressed how AI will be monitored in market conduct or financial condition examinations; confidentiality protections; and methods of coordinating with regulators. The discussion also explored which AI models will be the tool's focus, how the tool will assess and measure insurers' AI models, which parts of insurers' operations will be scrutinized, and which AI algorithms will and will not be evaluated.

The tool will put templates and exhibits in regulators' hands to start conversations around AI applications being used in the business, Ommen said in an interview with Digital Insurance.  

John Romano, principal, Baker Tilly

John Romano, a principal at auditing firm Baker Tilly who specializes in insurance company audits, sees NAIC's proposed AI tool as an inventory device, rather than an evaluation device. NAIC is trying to understand how insurers are using AI and how they assess risk, Romano said. He anticipates insurance industry resistance.

"The pushback from industry is this could just lead to more disclosure of information than the states know what to do with," he said. "They ultimately want to know what they are going to do with all this information."

NAIC's efforts to develop the AI tool or questionnaire are a means for various state regulators – who do not always agree – to reach an accepted standard for how to address AI in the insurance industry, explained Heidi Lawson, a partner in Fenwick's insurance, insurtech and financial services practice. Lawson and the practice specialize in representing tech companies engaged with the insurance and financial industries.

Heidi Lawson, partner at Fenwick

"There's a wide range of understanding of AI among the regulators," she said. "If they agree on one thing that could be helpful, because otherwise, it feels like a very high chance it's going to be misunderstood."

While 24 states have adopted NAIC's AI Model Bulletin, so far 10 states have agreed to use the AI Systems Evaluation Tool, according to a participant in the NAIC group's December 7 meeting. 

Lawson suggested that other methods could be more effective for regulating AI. "There are some companies now that test AI accuracy that could run diagnostics," she said. "That would be a lot more efficient and accurate, on whether there's any drift in the AI models. A Q&A is quite old-fashioned and it will have some value, but it will probably be quite limited."

Days after the meeting, on December 11, President Trump issued an order banning AI regulation at the state level. However, Ommen said, that addresses state legislation governing the development of AI, not how insurance or other industries use AI.

"As a state insurance regulator, my concern is not regulating the development," he said. "My concern is making sure that in the use of any tool such as AI, consumers are treated fairly and appropriately in their business with insurance companies."
