Industry | July/August 2024 Issue

AI Chalk Lines Are Being Drawn

New laws and best practices aim to prevent legal and security risks and other hazards in the use of artificial intelligence.
By Scott Sinder, Tod Cohen, Elizabeth Goodwin, Maria Avramidou Posted on July 19, 2024

In the insurance industry, AI is used in underwriting, customer service, claims processing, marketing, and fraud detection.

While AI brings many benefits, it also presents legal and regulatory challenges. These include:

  • Data privacy and security issues due to the massive amounts of data needed for training and operating generative AI tools like Microsoft Copilot, Google Gemini, or OpenAI’s GPT-4o
  • Bias and unfair discrimination since AI systems can perpetuate and even amplify existing biases if they are trained on biased data
  • Lack of transparency or explainability, as complex machine-learning algorithms may be “black boxes” so that humans cannot explain their decision-making processes for important eligibility determinations such as insurance pricing, underwriting, or claims adjustment.

Policymakers are beginning to respond to these challenges. Adoption of best practices should bolster the business case for using AI tools to the benefit of all insurance industry stakeholders.

Legal Developments

NAIC Model Bulletin > In December 2023, the National Association of Insurance Commissioners’ (NAIC) Innovation, Cybersecurity, and Technology Committee adopted the “Model Bulletin on the Use of Algorithms, Predictive Models, and Artificial Intelligence Systems by Insurers.” While not a model law or regulation, the Model Bulletin serves as a guiding document for the industry. Several states have formally adopted the Model Bulletin, including Alaska, Connecticut, Illinois, Maryland, New Hampshire, Pennsylvania, Rhode Island, and Vermont.

The Model Bulletin sets expectations regarding the use of AI, including that decisions or actions impacting consumers made or supported by AI must comply with all applicable insurance laws and regulations, including laws that address unfair trade practices and unfair discrimination. It advises insurers to develop, implement, and maintain a written artificial intelligence system (AIS) program for responsible use of AI systems that make or support decisions related to regulated insurance practices. Such AIS programs should be designed to mitigate “adverse consumer outcomes,” which the Model Bulletin defines as “a decision by an Insurer that is subject to insurance regulatory standards enforced by the [state Department of Insurance] that adversely impacts the consumer in a manner that violates those standards.”

Colorado AI Law > On May 17, 2024, Colorado Governor Jared Polis signed into law the nation’s first comprehensive private-sector AI bill. Like the European Union’s AI Act discussed below, the Colorado law implements a risk-based regulatory approach that addresses mainly “high-risk artificial intelligence systems,” which it defines as a system that, “when deployed, makes or is a substantial factor in making a consequential decision.” A “consequential decision” is one that has a material legal or similarly significant effect on the provision or denial of insurance to any consumer, or on the cost or terms of various other services.

The new Colorado law applies to both AI “developers” and “deployers.” A developer develops or intentionally and substantially modifies an AI system; a deployer uses a high-risk AI system in which the tool relies on the deployer’s data.

The law requires both developers and deployers to use reasonable care to avoid algorithmic discrimination in high-risk systems and creates a rebuttable presumption that a developer or deployer is using reasonable care if it takes specific compliance actions when deploying the system. For example, deployers of high-risk systems will be required to:

  • Implement a risk management policy and program
  • Annually review the deployment to ensure the high-risk system is not creating algorithmic discrimination
  • Notify a consumer of specified items, including the nature of the consequential decision and contact information for the deployer, if the high-risk system makes a consequential decision concerning the consumer.

The new law supplements existing Colorado anti-discrimination regulations that govern life insurers’ application of algorithms and predictive models that use external consumer data and information sources.

The EU AI Act > This summer, the European Union is expected to formally adopt the EU AI Act. Like the Colorado law, the EU AI Act will apply to developers and deployers, but it also will apply to importers, distributors, manufacturers, and others in the AI value chain. And like the EU’s privacy regulation regime, the AI Act’s territorial scope is broad and will impact organizations globally. That includes those active in the insurance industry, such as insurance agents and brokers, if they have European users, employees, or sufficient targeted connections with the EU.

Similar to the Colorado law, the EU AI Act will take a risk-based regulatory approach and impose a broad set of obligations on high-risk AI systems. The EU AI Act goes further, though: it bans outright AI systems viewed as presenting an unacceptable level of risk (such as certain facial recognition and social scoring systems).

Pursuant to the EU AI Act, if your firm has any European exposure, you must:

  • Assess whether each AI system being developed or used falls within the scope of the EU AI Act and its classifications, such as prohibited or high-risk uses
  • Evaluate your firm’s role, particularly as a deployer of high-risk systems such as those used in life or health insurance
  • Map out and comply with applicable obligations
  • Monitor relevant adoption and implementation of the required EU AI Act regulations.

Best Practices

These developments are undoubtedly just the beginning. Other policymakers are considering new broad-based and insurance-specific AI laws and regulations.

As Congress and the states continue to debate AI regulations and laws, insurance agents and brokers should be mindful of best practices that are already envisioned in the current legislation and regulations.

  • Disclose the specific use of AI in insurance-related processes, including underwriting and marketing
  • Explain the decision-making process related to the AI models and systems used, along with the outputs from these models
  • Comply with all applicable laws, including those related to discrimination, unfair trade practices, and data privacy
  • Ensure that third-party providers of AI models and applications comply with applicable laws
  • Document the AI model’s data sources and compliance processes to ensure compliance with applicable laws and justify decisions to state insurance departments in the event of an investigation.

The age of AI is upon us, and with it come new regulatory obligations and burdens. Are you ready?

Scott Sinder Chief Legal Officer, The Council; Partner, Steptoe
Tod Cohen Partner, Steptoe, Government Affairs and Public Policy Group
Elizabeth Goodwin Associate, Steptoe, Government Affairs and Public Policy Group
Maria Avramidou Associate, Steptoe, Brussels Office
