
How the NAIC AI model bulletin is evolving and why insurers should prepare now

March 24, 2026

As the NAIC considers potential updates to the AI model bulletin, insurers should expect AI governance — and the supporting cybersecurity controls — to remain a regulatory priority. Proactive alignment today may reduce compliance, examination, and enforcement risk in the future.

The guidance surrounding AI in insurance continues to evolve. Insurers can’t afford to wait for these efforts to conclude, because the questions they face are becoming increasingly specific. Can you demonstrate how your AI models are governed? How they’re monitored, secured, and audited? For many insurers, the answer is still a work in progress.

Today, many insurers have integrated AI into critical functions such as data entry, underwriting, claims processing, pricing strategies, fraud detection, and customer engagement. This widespread adoption has shifted the focus beyond responsible use toward a more comprehensive examination of how AI systems are governed, monitored, and protected. The National Association of Insurance Commissioners (NAIC) has recently introduced significant initiatives relevant to the use of AI that signal heightened scrutiny, and possible enforcement actions, related to AI governance and cybersecurity controls. As a result, strong cybersecurity controls are becoming central to how regulators evaluate insurers’ use of AI.

The NAIC AI Model Bulletin was released in 2023 as a principles-based guide to help insurers use AI responsibly. At the time, most insurers were still experimenting with AI, so the bulletin focused on high-level concepts, like transparency, fairness, accountability, and risk management, rather than detailed requirements or enforcement. Since then, more states have adopted or referenced the bulletin and incorporated its expectations into their broader regulatory frameworks.

As AI continues to become embedded in core insurance operations, risk is increasingly being viewed through a cybersecurity lens. The NAIC has formed a dedicated AI working group and is developing an AI systems evaluation tool — steps that point toward more structured oversight and regulation of the cybersecurity controls that underpin AI operations. Here’s what insurers should expect to be evaluated more closely.

Data security and integrity

Protecting the integrity of the data that feeds AI models requires oversight. Insurers will be expected to demonstrate that AI data is protected against unauthorized access, alteration, or loss throughout its life cycle. This requires strong safeguards for sensitive information, including personally identifiable information (PII), protected health information (PHI), and proprietary data: edit checks, encryption at rest and in transit, and clear data classification controls. To comply effectively, insurers should put governance principles into action by establishing practical controls. This may mean creating formal approval processes for AI use cases, designating specific points where a human reviews AI decisions (especially those involving sensitive data) to ensure appropriate restrictions are in place, and maintaining thorough records for audits and regulatory examinations. Without these practical measures, even well-documented AI policies may not protect you from regulatory, operational, or reputational risk.
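
As a concrete illustration, the Python sketch below shows one way to pair data classification with encryption at rest. It assumes the open-source cryptography package; the field names, classification levels, and the protect_record helper are hypothetical examples, not controls prescribed by the bulletin.

from cryptography.fernet import Fernet

# Illustrative classification levels; a real program maps these to policy.
CLASSIFICATION = {
    "policy_number": "proprietary",
    "ssn": "pii",
    "diagnosis_code": "phi",
    "quote_amount": "internal",
}
SENSITIVE = {"pii", "phi"}

key = Fernet.generate_key()  # in production, keys would live in a managed key store
cipher = Fernet(key)

def protect_record(record: dict) -> dict:
    # Encrypt fields classified as PII or PHI before storage (encryption at rest).
    return {
        field: cipher.encrypt(str(value).encode())
        if CLASSIFICATION.get(field) in SENSITIVE else value
        for field, value in record.items()
    }

record = {"policy_number": "PN-1042", "ssn": "123-45-6789",
          "diagnosis_code": "E11.9", "quote_amount": 1250}
print(protect_record(record))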

Access and change management controls

Effective AI governance requires clear control over who can access, modify, and deploy AI models. This means demonstrating that AI models are protected from unauthorized changes, and that updates move through formal, documented workflows. Strong access controls, including role-based access, segregation of duties, and regular access reviews, help limit insider risk. Change management practices should include documented approvals, testing, validation, and rollback procedures aligned with model risk management standards.
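
A minimal sketch of such a deployment gate, written in Python, appears below. The role names, the ChangeRequest record, and the approval rules are illustrative assumptions, not a prescribed NAIC workflow; the point is that approval and deployment are separated and every step is logged.

from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative roles; segregation of duties means no one person holds all three.
ROLES = {"alice": {"model_developer"}, "bob": {"model_approver"}, "carol": {"deployer"}}

@dataclass
class ChangeRequest:
    model_id: str
    version: str
    submitted_by: str
    approved_by: str | None = None
    tests_passed: bool = False
    log: list = field(default_factory=list)

def _stamp(cr, event):
    cr.log.append((datetime.now(timezone.utc).isoformat(), event))

def approve(cr: ChangeRequest, user: str) -> None:
    assert "model_approver" in ROLES.get(user, set()), f"{user} may not approve"
    assert user != cr.submitted_by, "submitter cannot approve their own change"
    cr.approved_by = user
    _stamp(cr, f"approved by {user}")

def deploy(cr: ChangeRequest, user: str) -> None:
    assert "deployer" in ROLES.get(user, set()), f"{user} may not deploy"
    assert cr.approved_by and cr.tests_passed, "approval and testing must precede deployment"
    _stamp(cr, f"deployed by {user}")

cr = ChangeRequest("pricing-model", "2.3.1", submitted_by="alice")
cr.tests_passed = True  # stand-in for a documented validation step
approve(cr, "bob")
deploy(cr, "carol")
print(cr.log)

Because the submitter can never approve their own change, the sketch encodes segregation of duties directly in the workflow rather than leaving it to policy documents alone.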

For insurers operating at scale, this level of discipline around model access is also a practical safeguard. If something goes wrong with an AI model, your ability to trace who changed what, and when, can be the difference between a contained incident and a regulatory investigation. Traceability matters because it replaces the “black box” problem with accountability: you can see exactly which model version, training data set, or human decision led to an issue.
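
One lightweight way to achieve that traceability is a registry entry that ties each model version to a fingerprint of its training data and to the people who changed and approved it. The Python sketch below is illustrative; the entry fields, file name, and hashing scheme are assumptions, not a standard format.

import hashlib
import json
from datetime import datetime, timezone

def fingerprint(data: bytes) -> str:
    # Hash the training data so a model version ties back to its exact inputs.
    return hashlib.sha256(data).hexdigest()

def registry_entry(model_id, version, training_data, changed_by, approved_by):
    return {
        "model_id": model_id,
        "version": version,
        "training_data_sha256": fingerprint(training_data),
        "changed_by": changed_by,
        "approved_by": approved_by,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

training_data = b"policy_id,claim_amount\nP-1,1250\nP-2,430\n"  # illustrative stand-in

# Append-only log: every change becomes a record auditors can replay later.
with open("model_registry.jsonl", "a") as log:
    log.write(json.dumps(registry_entry(
        "claims-triage", "1.4.0", training_data,
        changed_by="alice", approved_by="bob")) + "\n")

Writing entries to an append-only log means an auditor can replay the full change history rather than reconstructing it after the fact.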

Third-party and vendor risk management

Many insurers rely on third-party vendors, making vendor risk management critical for AI oversight. Programs should explicitly address AI-related risks, including data security, model updates, incident response, and regulatory compliance. Assess whether your vendors adhere to established security and AI governance frameworks, and make sure your contracts give you the right to conduct audits, access necessary documentation, and receive prompt notification of any incidents. Taking these steps not only safeguards against third-party risk but also prepares you for situations that demand a swift, coordinated response.

Incident response

Traditional incident response programs often focus on data breaches, system outages, or malware infections. For AI-enabled environments, insurers must broaden their definition of an “incident” to include AI-specific events, such as model performance drift, poisoned or tampered training data, and discriminatory model outputs. You should ensure that you’ve formally identified these scenarios and integrated them into your incident classification and response frameworks. By addressing them proactively, you can demonstrate that your organization is prepared to manage AI-related risks and respond effectively when issues arise.
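
As a sketch of what a broadened incident taxonomy might look like, the Python below places AI-specific categories alongside traditional ones and routes each to a response tier. The category names and tier assignments are illustrative assumptions, not a standard classification.

from enum import Enum

class IncidentType(Enum):
    # Traditional categories
    DATA_BREACH = "data breach"
    SYSTEM_OUTAGE = "system outage"
    MALWARE = "malware infection"
    # AI-specific categories the response plan should also cover
    MODEL_DRIFT = "model performance degradation"
    DATA_POISONING = "tampered or poisoned training data"
    PROMPT_INJECTION = "adversarial prompt input"
    BIASED_OUTPUT = "discriminatory model decision"

# Illustrative routing: tier 1 incidents trigger compliance notification.
RESPONSE_TIER = {
    IncidentType.DATA_BREACH: 1,
    IncidentType.DATA_POISONING: 1,
    IncidentType.BIASED_OUTPUT: 1,
    IncidentType.SYSTEM_OUTAGE: 2,
    IncidentType.MALWARE: 2,
    IncidentType.MODEL_DRIFT: 2,
    IncidentType.PROMPT_INJECTION: 2,
}

def classify(incident: IncidentType) -> str:
    tier = RESPONSE_TIER[incident]
    action = "notify compliance and legal" if tier == 1 else "route to engineering triage"
    return f"{incident.value}: tier {tier}, {action}"

print(classify(IncidentType.DATA_POISONING))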

Proactive alignment as a competitive advantage

As AI becomes even more integrated into key day-to-day insurance operations, cybersecurity maturity will be a defining indicator of effective AI governance. Insurers that fail to strengthen cybersecurity controls may find it difficult to demonstrate that their AI systems are safe, reliable, and compliant. For many insurers, this means engaging external providers to bring specialized expertise, perform independent gap assessments, and benchmark controls against regulatory and industry standards. These providers can also help design, implement, and sustain regulator-ready AI governance programs.

With new working groups, evolving frameworks, and an AI systems evaluation tool already in development, the infrastructure for more rigorous examination is taking shape in real time. Your AI governance program will be evaluated. Readiness requires thoughtful preparation, intentional gap closure, and a program tailored to your specific AI systems.

That’s why the right partner matters. Bringing in someone with industry experience helps you set priorities, sequence them effectively, and scale as expectations move at the pace of the technology. Industry standards will continue to evolve as AI becomes more embedded in insurance operations. Ultimately, insurers must be able to show that their AI governance is consistently and verifiably integrated into the systems and processes where their AI lives.

If your data foundation isn’t strong, AI won’t deliver reliable results. Evaluate your systems and data quality to pinpoint gaps prior to AI adoption. 
