The insurance sector is one of the leading adopters of artificial intelligence (AI). From underwriting to claims management, AI and predictive analytics have driven efficiency and accuracy for decades. However, even in an industry well-versed in AI, its deployment still carries new and evolving risks.
Indeed, the National Association of Insurance Commissioners (NAIC) has issued a Model Bulletin on insurers’ use of AI for regulators to distribute in their states. The Bulletin, which builds on the NAIC’s Principles on Artificial Intelligence adopted in 2020, establishes clear expectations for insurers regarding AI governance, fairness, and accountability. In this blog post, we outline the key elements of the NAIC AI Model Bulletin, summarize state-level adoption, and highlight what insurers should do now to ensure compliance.
While some states, such as Colorado, have enacted specific legislation targeting algorithms and predictive models in insurance, the absence of AI-specific laws elsewhere does not create a compliance loophole.
The NAIC Model Bulletin reinforces that existing laws already apply to AI-driven insurance practices, including the Unfair Trade Practices Act, the Unfair Claims Settlement Practices Act, the Corporate Governance Annual Disclosure Model Act, the Property and Casualty Model Rating Law, and market conduct surveillance laws.
Understanding whether your systems fall under the regulatory definition is a foundational compliance step. Tools that automatically generate outputs or make insurance-related decisions based on data may qualify as AI, depending on how they function. For example, a static rules-based claims checklist may fall outside the definition, while a machine learning model that scores claims for fraud risk would likely fall within it.
Under the NAIC bulletin, AI is defined as data processing systems or devices that perform functions usually associated with human intelligence, such as reasoning, learning, and self-improvement. Machine learning is defined as a subset of AI that learns from data without being explicitly programmed.
AI systems, in turn, are machine-based systems that generate outputs, such as predictions, recommendations, or content, for specific objectives that influence decisions in real or virtual environments, and that operate with varying levels of autonomy. This definition generally aligns with the EU AI Act and other global frameworks.
The bulletin also addresses generative AI, defining it as a class of AI systems that generate data, text, images, sounds, or video that resemble pre-existing data or content.
The NAIC Model Bulletin calls for insurers to develop, implement, and maintain a written artificial intelligence systems (AIS) program for the responsible use of AI. The goal is to reduce Adverse Consumer Outcomes, defined as decisions made by insurers that adversely impact consumers in a manner that violates applicable regulatory standards.
The AIS Program should address:

- Governance, including oversight of AI systems by senior management and, as appropriate, the board
- Risk management and internal controls across the AI system lifecycle, from design and development through deployment and monitoring
- Oversight of third-party AI systems, models, and data, including due diligence and contractual protections
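In practice, a written AIS program typically rests on a documented inventory of the insurer’s AI systems. The sketch below shows what a minimal inventory record might look like in Python; the field names, values, and review rule are illustrative assumptions, not terms prescribed by the bulletin.

```python
# A minimal, hypothetical inventory record for one AI system, of the kind
# a written AIS program might maintain. All fields here are illustrative
# assumptions, not requirements taken from the NAIC Model Bulletin.
from dataclasses import dataclass


@dataclass
class AISystemRecord:
    name: str                  # internal identifier, e.g. "claims-triage-model"
    business_use: str          # underwriting, claims, marketing, etc.
    owner: str                 # accountable business or technical owner
    third_party: bool          # whether the model or its data is vendor-supplied
    last_bias_audit: str       # date of the most recent fairness review
    risk_tier: str = "unassessed"  # internal risk classification


inventory = [
    AISystemRecord(
        name="claims-triage-model",
        business_use="claims",
        owner="claims-analytics-team",
        third_party=True,
        last_bias_audit="2024-01-15",
        risk_tier="high",
    ),
]

# Governance reviews can then be driven off the inventory, for example by
# flagging high-risk or third-party systems for closer oversight.
for record in inventory:
    if record.risk_tier == "high" or record.third_party:
        print(f"Review required: {record.name} ({record.business_use})")
```

Keeping the inventory in a structured, machine-readable form makes it straightforward to feed governance dashboards and regulator-facing documentation from a single source of truth.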
Insurers are also encouraged to develop verification and testing methods that identify bias, drift, and error in AI systems and predictive models. Colorado’s SB21-169 regulations provide a helpful precedent for conducting bias audits of insurance algorithms.
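As a concrete starting point, the sketch below implements two common checks in Python: an adverse impact ratio (the "four-fifths rule") for bias, and a population stability index (PSI) for drift. The thresholds and synthetic data are illustrative assumptions, not values drawn from the bulletin or Colorado’s regulations.

```python
# A minimal sketch of two checks an AIS testing program might include.
# Thresholds and data here are illustrative, not regulatory values.
import numpy as np


def adverse_impact_ratio(approved: np.ndarray, group: np.ndarray,
                         protected: str, reference: str) -> float:
    """Ratio of the protected group's approval rate to the reference group's."""
    prot_rate = approved[group == protected].mean()
    ref_rate = approved[group == reference].mean()
    return float(prot_rate / ref_rate)


def population_stability_index(baseline: np.ndarray, current: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a baseline score distribution and a current one."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    b_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    c_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Clip empty bins to avoid division by zero and log(0).
    b_pct = np.clip(b_pct, 1e-6, None)
    c_pct = np.clip(c_pct, 1e-6, None)
    return float(np.sum((c_pct - b_pct) * np.log(c_pct / b_pct)))


# Illustrative use with synthetic data:
rng = np.random.default_rng(0)
approved = rng.integers(0, 2, 1000)          # hypothetical approval decisions
group = rng.choice(["A", "B"], 1000)         # hypothetical group labels

air = adverse_impact_ratio(approved, group, protected="B", reference="A")
psi = population_stability_index(rng.normal(0.50, 0.1, 1000),   # training scores
                                 rng.normal(0.55, 0.1, 1000))   # production scores

print(f"Adverse impact ratio: {air:.2f} (values below ~0.8 commonly flag review)")
print(f"PSI: {psi:.3f} (values above ~0.2 commonly flag significant drift)")
```

In a production testing program, checks like these would run on the insurer’s actual decision logs and score distributions on a recurring schedule, with results documented as part of the AIS program.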
Regulators in multiple U.S. states have adopted the NAIC Model Bulletin, signaling growing alignment around AI governance in insurance.
As regulatory expectations evolve, insurers should take immediate steps to build transparency, fairness, and accountability into their AI practices.
Holistic AI is an AI governance platform that enables confident AI deployment and accelerates innovation for enterprises across the U.S. and Europe. By integrating Holistic AI’s governance platform, insurers can govern and assure their AI systems, reduce regulatory exposure, and increase the ROI on their AI investments.


Get a demo