InsureThink

Navigating the complex landscape of AI regulation in insurance


Insurance companies have enthusiastically adopted artificial intelligence to revolutionize their business operations, with this technology powering everything from claims processing and marketing campaigns to market research. However, this transformation has created a significant regulatory challenge that threatens to overwhelm carriers unprepared for the complexity ahead.

Without federal oversight establishing uniform standards for AI use or data privacy protection, individual states have stepped in to create their own rules. Twenty-four states currently enforce AI or data privacy regulations, and more legislation is expected as state legislatures return to session in January. For insurance carriers conducting business nationwide, complying with this patchwork of differing requirements demands robust governance strategies for both AI systems and the information they process.

Understanding the regulatory patchwork

The National Association of Insurance Commissioners (NAIC) adopted a model bulletin in 2023 to help insurers implement AI ethically and securely. The 24 states noted above have adopted guidance based on these recommendations, which emphasize auditing procedures, transparent governance structures, effective risk management protocols, and vendor oversight. While the bulletin provides a helpful starting point, state implementations vary considerably in their specific requirements.

Data privacy requirements illustrate this variation clearly. Five states—California, Colorado, Connecticut, Maryland, and Minnesota—mandate that companies enable consumers to express privacy preferences automatically through universal opt-out tools. Tennessee imposes no such requirement, demonstrating how obligations differ even among states with privacy protections.

Some jurisdictions impose additional restrictions. New Jersey requires parental consent before processing data from teenagers aged 13-17 for targeted advertising or profiling purposes. Maryland establishes even stricter standards by requiring that any processing of sensitive data be strictly necessary for service delivery and prohibiting the sale of such information—standards that exceed both Colorado's adequacy requirements and California's reasonableness tests.

Beyond privacy protections, certain states regulate how AI influences decisions that affect consumers. Colorado's Artificial Intelligence Act imposes extensive obligations on "high-risk" systems, requiring organizations to prove their algorithms don't discriminate. To demonstrate compliance with anti-discrimination regulations, insurers will need to feed AI systems personally identifiable information, which in turn triggers additional data privacy compliance obligations.

Many states also require insurers to archive data, models, and testing artifacts used to validate AI performance, making these materials available to regulators upon request. Colorado's law grants consumers the right to understand AI profiling decisions, learn how to achieve different outcomes, review their personal information used in profiling, correct errors, and request decision reevaluations based on corrected data. These provisions create substantial data retention and retrieval obligations that must be integrated into AI governance from the beginning.

Compliance triggers also vary significantly across jurisdictions. Maryland's requirements apply to companies that serve 35,000 customers and derive over half their revenue from selling personal information. Montana's threshold is 25,000 customers, Tennessee's is 175,000, and Minnesota's is 100,000. Carriers must carefully monitor their customer counts in each state to identify when compliance obligations begin.
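To make the monitoring task concrete, the threshold check described above can be sketched as a simple lookup. This is an illustrative sketch only: the function and data structure names are hypothetical, and the thresholds are the per-state customer counts cited in this article, not a complete statement of any statute's applicability tests.

```python
# Illustrative sketch: flag states where a carrier's customer counts
# cross the compliance thresholds cited in this article. All names
# are hypothetical; real applicability tests involve more factors.
STATE_THRESHOLDS = {
    "MD": 35_000,   # Maryland
    "MT": 25_000,   # Montana
    "MN": 100_000,  # Minnesota
    "TN": 175_000,  # Tennessee
}

def states_triggering_compliance(customer_counts: dict[str, int]) -> list[str]:
    """Return states where the customer count meets or exceeds the threshold."""
    return sorted(
        state
        for state, threshold in STATE_THRESHOLDS.items()
        if customer_counts.get(state, 0) >= threshold
    )

counts = {"MD": 36_500, "MT": 12_000, "MN": 104_000, "TN": 90_000}
print(states_triggering_compliance(counts))  # ['MD', 'MN']
```

In practice such a check would run continuously against policy-administration data, since a carrier can cross a threshold mid-year as its book of business grows.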

Building effective governance

Comprehensive, automated data governance provides a foundation for sustainable compliance in these highly regulated environments. By comparison, manual classification methods lack the flexibility and scalability that multi-state operations require. Insurance carriers should implement discovery and management platforms that autonomously identify and govern data, tagging information with appropriate sensitivity classifications and tracking its movement through AI workflows.
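One way to picture the automated tagging step is a rule-based classifier that assigns sensitivity labels to data fields. This is a minimal sketch under loose assumptions: real discovery platforms use far richer signals than field names, and the patterns, labels, and function names here are hypothetical examples, not any platform's actual taxonomy.

```python
import re

# Illustrative sketch: tag incoming fields with sensitivity labels so
# downstream AI workflows can apply the right controls. Patterns and
# labels are hypothetical examples for illustration only.
SENSITIVITY_RULES = [
    (re.compile(r"ssn|social_security", re.I), "restricted"),
    (re.compile(r"dob|birth|medical|diagnosis", re.I), "sensitive"),
    (re.compile(r"email|phone|address", re.I), "personal"),
]

def classify_field(field_name: str) -> str:
    """Return the first matching sensitivity label, or 'general' if none match."""
    for pattern, label in SENSITIVITY_RULES:
        if pattern.search(field_name):
            return label
    return "general"

record_fields = ["claim_id", "policyholder_email", "ssn", "diagnosis_code"]
print({field: classify_field(field) for field in record_fields})
```

The value of automating this step is that every field entering an AI workflow carries a machine-readable label, which the movement-tracking and audit systems described below can then rely on.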

Effective frameworks must address not just data access, but also usage patterns, processing locations, and generated outputs. This comprehensive approach requires detailed tracking systems to maintain records of data lineage, creating audit trails that follow information as AI systems transform it.

Governance frameworks must also accommodate multiple regulatory requirements simultaneously through automated controls that enforce different standards based on data types, user locations, and processing purposes. Continuous monitoring should alert stakeholders when AI systems operate outside approved boundaries while producing detailed audit trails and impact assessments.
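The jurisdiction-aware control described above might be sketched as a policy check that combines data type, consumer location, and processing purpose, denying by default for sensitive data. The rules below are simplified stand-ins loosely inspired by the state restrictions discussed earlier, not actual statutory logic, and every name is hypothetical.

```python
from dataclasses import dataclass

# Illustrative sketch: evaluate whether a processing request is allowed
# under simplified per-jurisdiction rules keyed on data type, consumer
# location, and purpose. Rules are stand-ins, not real statutory logic.
@dataclass(frozen=True)
class ProcessingRequest:
    data_type: str       # e.g. "sensitive", "personal", "general"
    consumer_state: str  # two-letter state code
    purpose: str         # e.g. "service_delivery", "targeted_advertising"

def evaluate(request: ProcessingRequest) -> tuple[bool, str]:
    """Return (allowed, reason); restrictive by default for sensitive data."""
    if request.data_type == "sensitive" and request.purpose != "service_delivery":
        return False, "sensitive data limited to service delivery"
    if request.purpose == "targeted_advertising" and request.consumer_state == "NJ":
        return False, "consent check required before profiling or ad targeting"
    return True, "permitted under configured rules"

allowed, reason = evaluate(
    ProcessingRequest("sensitive", "MD", "targeted_advertising")
)
print(allowed, reason)  # False sensitive data limited to service delivery
```

A denial from a check like this is exactly the event that should raise the stakeholder alerts mentioned above, with the request and reason appended to the audit trail.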

As regulations continue to evolve, insurance carriers need solid governance foundations to maintain compliance across all jurisdictions. The solution lies in developing adaptable frameworks that accommodate new requirements while preserving operational efficiency. Organizations mastering today's compliance challenges will be better positioned to leverage tomorrow's AI innovations while maintaining consumer trust and regulatory approval.
