
AI Governance in Insurance: How U.S. Regulators Are Setting New Standards

The insurance sector is one of the leading adopters of artificial intelligence (AI). From underwriting to claims management, AI and predictive analytics have driven efficiency and accuracy for decades. However, even in an industry well-versed in AI, its deployment still carries new and evolving risks.

Indeed, the National Association of Insurance Commissioners (NAIC) has issued a Model Bulletin on insurers' use of AI for regulators to adopt in their states. The Bulletin, building on the NAIC's Principles on Artificial Intelligence adopted in 2020, establishes clear expectations for insurers regarding AI governance, fairness, and accountability. In this blog post, we outline the key elements of the NAIC AI Model Bulletin, summarize state-level adoption, and highlight what insurers should do now to ensure compliance.

Existing Laws Already Regulate AI Use in Insurance

While some states, such as Colorado, have enacted specific legislation targeting algorithms and predictive models in insurance, the absence of AI-specific laws elsewhere does not create a compliance loophole.

The NAIC Model Bulletin reinforces that existing laws already apply to AI-driven insurance practices. This includes laws and regulations governing Unfair Trade Practices, Unfair Claims Settlement Practices, Corporate Governance Annual Disclosure, Property and Casualty Model Rating, and Market Conduct Surveillance.

How Insurers Can Identify AI Systems in Their Operations

Understanding whether your systems fall under the regulatory definition is a foundational compliance step. Tools that automatically generate outputs or make insurance-related decisions based on data may qualify as AI, depending on how they function.

Under the NAIC bulletin, AI is defined as data processing systems or devices that perform functions usually associated with human intelligence, including reasoning, learning, and self-improvement. Machine learning is defined as a subset of AI that learns from data without being explicitly programmed.

AI systems are machine-based systems capable of generating outputs—such as predictions, recommendations, and content—for specific objectives that influence decisions in real or virtual environments. They also act with varying levels of autonomy. This definition generally aligns with the EU AI Act and other global frameworks.

The bulletin also addresses generative AI, defining it as a class of AI systems that generate data, text, images, sounds, or video that resemble pre-existing data or content.

Insurers Must Implement an AIS Program to Reduce AI Risk

The NAIC Model Bulletin calls for insurers to develop, implement, and maintain a written artificial intelligence systems (AIS) program for the responsible use of AI. The goal is to reduce Adverse Consumer Outcomes, which are defined as decisions made by insurers that adversely affect consumers and violate regulatory standards.

The AIS Program should address:

  • Governance – transparency, fairness, and accountability across the AI system lifecycle through documented oversight structures.
  • Risk management and internal controls – documented risk identification, mitigation, and management for developing, adopting, or acquiring AI systems, as well as data governance, validation, and accountability mechanisms.
  • Third-party AI systems and data – due diligence and audit processes for third-party tools and datasets, and agreements on cooperation with regulatory inquiries.

Insurers are also encouraged to develop verification and testing methods that identify bias, drift, and error in AI systems and predictive models. Colorado’s SB21-169 regulations provide a helpful precedent for conducting bias audits of insurance algorithms.
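As a hedged illustration of what such a bias test might look like, the sketch below checks hypothetical underwriting approval decisions against the "four-fifths rule" (adverse impact ratio), a common fairness screen. The group labels, the sample data, and the 0.8 threshold are illustrative assumptions, not values prescribed by the NAIC bulletin or Colorado's regulations.

```python
# Illustrative sketch: adverse impact ratio check on hypothetical
# underwriting approval outcomes (1 = approved, 0 = denied).
# Groups, data, and the 0.8 threshold are assumptions for the example.

def approval_rate(decisions):
    """Share of applicants in a group who were approved."""
    return sum(decisions) / len(decisions)

def adverse_impact_ratio(protected, reference):
    """Protected group's approval rate divided by the reference group's."""
    return approval_rate(protected) / approval_rate(reference)

group_a = [1, 1, 0, 1, 0, 1, 1, 0]   # reference group: 5/8 approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # protected group: 3/8 approved

air = adverse_impact_ratio(group_b, group_a)
print(f"Adverse impact ratio: {air:.2f}")  # prints "Adverse impact ratio: 0.60"
if air < 0.8:
    print("Potential disparate impact - flag model for review")
```

A ratio below 0.8 does not by itself prove unlawful discrimination, but it is a widely used trigger for deeper investigation of the kind the bulletin's verification and testing guidance anticipates.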

State-by-State AI Governance Updates: How Regulators Are Enforcing the NAIC Bulletin

Regulators in multiple U.S. states have adopted the NAIC Model Bulletin, signaling growing alignment around AI governance in insurance.

Practical Steps for Insurers to Build Responsible AI Governance

As regulatory expectations evolve, insurers should take immediate steps to build transparency, fairness, and accountability into their AI practices.

  • Create an AI inventory covering all first and third-party tools.
  • Establish governance mechanisms throughout the AI system lifecycle – from design and training to monitoring and retirement.
  • Monitor regulatory developments to maintain continuous compliance readiness.

How Holistic AI Empowers Insurers to Govern AI Responsibly and Compliantly

Holistic AI is an AI governance platform that enables confident AI deployment and accelerates innovation for enterprises across the U.S. and Europe. Our platform enables organizations to:

  • Identify and inventory AI systems across business units
  • Assess and mitigate risk through testing, bias/fairness evaluation, red teaming, and other risk management practices
  • Align teams to centralized AI usage guidelines and model governance controls
  • Demonstrate compliance with NAIC-aligned AI governance expectations

By integrating Holistic AI’s governance platform, insurers can govern and assure AI systems, reduce regulatory exposure, and increase ROI on AI investments.

Get a demo now!
