Industry | October 2024 Issue

New Rules for New Technologies

The European Union’s AI Act creates additional regulations for insurers and brokers, but also opportunities for new business.
By Chris Larson Posted on October 1, 2024

The European Union is at the forefront of regulating artificial intelligence after enacting its AI Act this summer. Its rules are likely to provide a template for regulators elsewhere.

Insurance companies are increasingly investing in AI for underwriting and other functions, so brokers and carriers are expected to feel impacts from the EU regulations.

The EU rules could also lead to new insurance products, such as corporate liability coverage for lawsuits related to artificial intelligence.

In a 2021 series of posts on the site then called Twitter, insurtech carrier Lemonade said it was using bots and machine learning to make insurance “instant, seamless and delightful”—while also “producing nuanced profiles of our users and remarkably predictive insights” and ultimately lowering its loss ratios.

Lemonade was especially proud that policyholders could submit any claim by explaining what had happened via a smartphone video. The thread said its artificial intelligence (AI) models could “pick up non-verbal cues” and analyze the videos for “signs of fraud.”

Critics took that to mean Lemonade was using facial recognition to identify, and then reject, fraudulent claims. Those critics, including privacy advocates and AI experts, pointed out that facial recognition technology was known for being particularly unreliable when analyzing faces that are not Caucasian or male. Lemonade thus could easily be incorrectly dismissing claims made by anyone but white men.

The carrier quickly backtracked, deleting the Tweet thread and calling the phrase “non-verbal cues” a “bad choice of words.” Lemonade said it used facial recognition only to flag claims made “by the same person under different identities” and that those flagged submissions were reviewed by a human before any action was taken. The company emphasized that it “never let AI perform deterministic actions such as rejecting claims.”

The intense response to Lemonade’s original statement underscored growing and still-unresolved concerns over the rapid adoption of artificial intelligence—including large language models, natural language models, and generative AI—in nearly every industry, including insurance. Among the fears are output bias, data security breaches, and loss of privacy.

These worries have encouraged lawmakers around the world to draw up rules for the development and use of AI. The most wide-ranging to date was enacted this summer: the European Union’s AI Act.

Some insurers view the EU requirements as more significant and the fines as more serious. They are looking at undertaking those changes for all of their operations, so that they don’t have a patchwork of risk management frameworks to deal with for compliance purposes in the EU versus in the U.S. versus other places in the world.
Philip Dawson, head of AI policy, Armilla AI

The EU law could be used as a model for similar regulations in the United States, though experts are divided on how likely that is to happen. Regardless, the law will impact how U.S. brokers and insurers approach their use of AI systems. For carriers with global operations in particular, that means inventorying and assessing the AI systems they use and verifying that the insurer, and the entity that actually developed the system, are meeting all the new regulatory obligations.

“There’s some parallel regulatory development at the state level in the U.S.,” says Philip Dawson, head of AI policy at artificial intelligence risk management specialist Armilla AI. “Some insurers view the EU requirements as more significant and the fines as more serious. They are looking at undertaking those changes for all of their operations, so that they don’t have a patchwork of risk management frameworks to deal with for compliance purposes in the EU versus in the U.S. versus other places in the world.”

“I think [insurers] are very much looking at the EU, because it is the most comprehensive and thorough law we have globally,” says Jody Westby, CEO of Global Cyber Risk. “The U.S. needs to better understand what the EU AI Act is, and the insurance industry should be a leader in advancing that understanding.”

The European Union’s AI Act establishes four broad risk tiers for AI technology: minimal, limited, high, and unacceptable. The regulatory framework tightens with each tier, with full prohibition of technologies that are deemed unacceptable—for example, a system that could be applied to manipulate human behavior. (See Sidebar: What is the EU AI Act?)

“The impact of the EU AI Act on the insurance industry will be very significant, particularly for life and health insurers,” which use technologies that could be considered high-risk, Dawson says.

Other AI models that insurers and brokers currently use would fall into the limited- or minimal-risk categories, where regulations focus on transparency of use and on AI literacy: the skills, knowledge, and understanding that allow developers, users, and affected persons to deploy AI systems in an informed way and to recognize both the opportunities of AI and the risks and harm it can cause.

The European Artificial Intelligence Act officially came into force on Aug. 1. The goal, in the words of the European Commission, is to “foster responsible artificial intelligence development and deployment.”

The Act is considered to be the world’s first comprehensive framework around AI and its uses, and came after several years of negotiating between governments, companies, and other stakeholders.

It covers development, deployment, and use of various AI models within the European Union, and is meant to ensure that all AI technologies are safe. It particularly encourages transparency and accountability by both developers and users.

The Act classifies AI systems into different tiers based on their perceived risk. Those considered to carry an “unacceptable risk”—like systems that purport to predict whether a person will commit a crime—are banned outright.

“High-risk” AI models face strict regulations, including extensive testing and oversight; they include models dealing with infrastructure and critical public goods, like healthcare.

Among the AI systems that the new law considers high-risk are those that life and health insurers use for pricing and risk assessment, on the grounds that these could have major impacts on a person’s life or health.

AI models used to detect fraud in financial services and insurance are not generally considered high-risk, the new law says, though the use of biometric data is either prohibited entirely or classified as high-risk.

Models that are ranked as having “minimal” risk face far fewer regulations, though they will still have obligations around AI literacy.

The Act’s provisions begin taking effect next February, when systems in the unacceptable-risk tier are banned entirely. Other provisions will come online over the next couple of years. Violations can be costly: using a prohibited system can mean fines of up to €35 million ($38 million) or 7% of a company’s annual global revenue, whichever is higher.
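The penalty formula—the greater of a fixed cap or a share of worldwide revenue—can be sketched in a few lines. This is an illustrative calculation only, not legal guidance; the function name and example revenue figures are assumptions made for the sketch.

```python
# Sketch of the AI Act's maximum penalty for prohibited-system violations:
# the higher of a €35 million fixed cap or 7% of annual global revenue.
EUR_CAP = 35_000_000   # fixed ceiling in euros
REVENUE_SHARE = 0.07   # 7% of worldwide annual revenue

def max_fine(annual_global_revenue_eur):
    """Return the maximum possible fine, whichever amount is higher."""
    return max(EUR_CAP, REVENUE_SHARE * annual_global_revenue_eur)

# A carrier with €1 billion in revenue: 7% (€70M) exceeds the €35M cap.
print(max_fine(1_000_000_000))  # → 70000000.0
# A smaller firm with €100 million in revenue: the €35M cap applies.
print(max_fine(100_000_000))    # → 35000000
```

For large multinational insurers, the 7% revenue share will almost always dominate the fixed cap, which is why global carriers are treating the EU rules as the binding constraint.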

Industry Investing in AI

The insurance industry has long used big data and machine learning for various tasks, and its use of AI is growing as the technology advances. In a fall 2023 report, investment management firm Conning said 77% of surveyed senior insurance executives reported their company was using at least some form of AI, up from 61% a year earlier.

“What is new to insurance, like it’s new to everyone, is the capabilities of generative AI,” says James Clark, a partner at law firm DLA Piper. “That is starting to come online in the insurance industry, with things like chatbots that could increasingly replace human handlers” across numerous business functions for both brokers and insurers.

Insurers are particularly embracing AI in underwriting. AIG, for instance, is working on systems that provide “faster, more thorough, deeper analysis and improved customer service in quoting, binding, and policy issuance,” Chairman and CEO Peter Zaffino said during an earnings call in August.

Those efforts include large language models that automatically filter submissions through the company’s underwriting guidelines, increasing the number of submissions AIG’s underwriters can process compared with doing the work manually, Zaffino said.

Submission data will also be run through AIG’s underwriting guidelines and portfolio objectives, allowing the company to “more deeply and accurately analyze market conditions and enable dynamic adjustments to underwriting guidelines, pricing, and limit deployment,” he added.

Across the board, AI is being used for increased personalization of client accounts. “You now have AI models that can analyze more and more data points about an individual, including non-traditional data points that may not previously have been taken into account by an underwriter, for example, to deliver increasingly accurate and personalized insurance products in a cost-efficient way,” Clark says.

The back office is a common place for AI as a time-saving mechanism. “For instance, when emails come in, you can scan them using AI, and the AI will pass them to the right handler,” says Jochen Körner, CEO of Germany-based broker the Ecclesia Group. “Or it can summarize the issue and suggest what the case handler can do with the email.”

Artificial intelligence is likely to have the greatest industry impact in claims processing, according to Tod Cohen, a partner at law firm Steptoe who specializes in AI. “Because that’s where you’ve got both generative AI tools and more traditional AI tools being applied.”

For instance, a generative AI system can review an accident declaration form, extract the important information, and quickly input the data into a carrier’s claims management system while keeping policyholders informed of the claim status.
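The intake flow described above can be sketched as a small pipeline. Everything here is illustrative: the field names, the claim record, and the `extract_fields` stand-in (a simple regex placeholder where a generative model would actually sit) are assumptions, not any carrier’s real system.

```python
import re
from dataclasses import dataclass

@dataclass
class ClaimRecord:
    policy_id: str
    incident_date: str
    description: str
    status: str = "received"

def extract_fields(declaration):
    """Stand-in for the generative-AI extraction step: pull the policy ID
    and incident date out of a free-text accident declaration."""
    policy = re.search(r"policy\s+([\w-]+)", declaration, re.IGNORECASE)
    date = re.search(r"\b(\d{4}-\d{2}-\d{2})\b", declaration)
    return {
        "policy_id": policy.group(1) if policy else "UNKNOWN",
        "incident_date": date.group(1) if date else "UNKNOWN",
        "description": declaration.strip(),
    }

def file_claim(declaration):
    """Map extracted fields into the claims-management record and
    confirm receipt to the policyholder."""
    record = ClaimRecord(**extract_fields(declaration))
    print(f"Claim received for policy {record.policy_id}; status: {record.status}")
    return record

claim = file_claim("On 2024-09-14 I hit a deer. Policy ABC-123, front bumper damaged.")
```

In a production system the extraction step would call a language model and include human review, but the shape of the pipeline—free text in, structured record out, status pushed back to the policyholder—is the same.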

Regional Rules with Global Impact

The EU’s AI Act, along with other forthcoming regulations, could have a big impact on the industry, even for those brokers and insurers that are already using AI responsibly.

The EU regulations apply to any company that does business or has clients within the European Union, not just to brokers and carriers based in the 27-state bloc. That means numerous foreign companies will be caught up in the new rules. “This regulatory framework will probably have a global impact,” says Florian Pötzlberger, counsel at law firm Clyde & Co.

Health and life insurance carriers that use AI for pricing and risk assessment are considered to be using high-risk technologies. That’s due in part to the potential for AI systems in those areas to “introduce bias into the risk categorization or pricing, leading to unlawful discrimination against protected groups,” Dawson says.

These systems won’t be banned, but insurers using them will need to meet high levels of transparency and governance, says Steptoe partner Anne-Gabrielle Haie, who also specializes in AI.

“In the EU, insurers face more stringent obligations with respect to ‘high-risk systems,’ which include AI used in life and health insurance risk assessments and pricing,” Dawson adds. “Key adjustments include conducting assessments of these systems, compiling technical documentation about them and related data sets, and verifying that third-party providers also fulfill their obligations.”

At the other end of the spectrum, insurers might only need to enhance their transparency about use of low-risk AI technologies. “For example, if you use a chatbot, then you have to ensure the customer is aware that they are communicating with an AI tool,” Pötzlberger says.

It’s still early days for the EU regulations; some bans on the highest-risk AI systems will take effect in February, while other rules won’t apply until 2026. But those dates will arrive quickly. “Most companies are just in the foothills of this now,” says Clark. “Over the next six to 12 months, it will build up on the compliance side.”

To that end, experts are advising brokers and insurers to prepare now. “Because the AI Act comes with a lot of internal documentation obligations, they definitely need to go through their current use and development projects and bring them in line with these requirements,” says Jan Spittka, a partner at Clyde & Co whose practice areas include data privacy. This is especially true for those that are likely using high-risk AI models, he adds.

Companies must conduct an inventory of all AI systems they use to determine whether any fall under the new EU obligations, including the high-risk and unacceptable tiers, Haie says. “I think it’s very important to start now and not wait until the last minute.”

Even without legal requirements, it’s good business practice to be upfront about AI use. “From a reputational perspective, companies have an interest in following transparency rules,” Pötzlberger says.

I think [insurers] are very much looking at the EU, because it is the most comprehensive and thorough law we have globally. The U.S. needs to better understand what the EU AI Act is, and the insurance industry should be a leader in advancing that understanding.
Jody Westby, CEO, Global Cyber Risk

New Rules Demand New Products

The EU AI Act will create not just new commitments for insurers and brokers but the need for new insurance products to sell.

“AI is a technology that is here to stay, so we will also need insurance cover for those risks that evolve with it,” Pötzlberger says.

Few AI-specific products are on the market now, most experts agree, though Swiss Re said in late May that some carriers have started to offer “specific coverage for AI algorithm and performance risk.” Such risks include making business decisions based on incomplete data, flawed algorithm design, or an incorrect interpretation of the system’s output. Generative AI, meanwhile, creates risk when models are trained on copyrighted materials, which can lead to charges of copyright infringement.

The product set’s growth seems likely to follow a trajectory similar to the growth of cyber insurance a couple of decades ago: as new technologies bring new risks, insurers and brokers develop and sell policies aimed at covering those risks.

Munich Re is one insurer that offers distinct policies to cover a range of financial and liability risks faced by commercial AI users, says Michael Berger, head of the company’s Insure AI team. Berger says the insurer offered the first AI insurance policy in 2018, a contractual liability policy backing the performance warranty offered by a vendor of a novel AI-based credit card fraud-prevention software.

“More recently, we have expanded our product line to cater to the rising use of AI and Gen AI in the corporate sector,” Berger says. Munich Re offers coverage for innovation and AI adaptation risk, which provides a performance guarantee targeted toward AI vendors that wish to guarantee return on investment on their products to their customers. It covers operational risk, including business interruption or other forms of lost revenue for corporations adopting AI. The company also provides liability coverage for AI vendors and corporate users seeking protection from lawsuits arising out of the use of artificial intelligence, including IP infringement and discrimination. Limits are between €5 million and €50 million per AI model, Berger notes.

Having a dedicated underwriting team and product line for this technology, Berger believes, positions Munich Re as a leader in the AI risk solutions market. But he says that across the wider industry “carriers are still in the early stages when it comes to fully embracing the possibilities of insuring advanced AI technologies and AI models.”

Given the accelerating rate of corporate adoption of generative AI over the past two years, Berger says, “we see that carrier interest in participating in this area of insurance is increasing, just as we expect the demand for risk transfer to grow.”

Another early example is Armilla AI, which says it uses an “automated AI verification technology” to gauge the safety and trustworthiness of AI models, giving both providers and deployers some assurance that they can recoup their costs if the model fails.

Armilla says it’s offering a warranty, not insurance. Still, the product is backed by Swiss Re, Greenlight Re, and Chaucer; the latter two firms also invested in Armilla’s seed fundraising round. That early participation points to the insurance industry’s interest in getting AI right and also profiting from it—interests that will grow as AI models and regulations expand.

There are countless ways in which AI could expose an organization to liability or other damages. If a company uses a third-party AI-based chatbot for customer service, but that chatbot gives out wrong information, that could hurt the company’s reputation or worse. Or an insurer could employ a third-party AI-based fraud detection tool that performs worse than advertised, costing the carrier millions of dollars in fraudulent claims.

A government could use an AI system to create risk profiles in an effort to root out tax or benefits fraud. But if it’s poorly trained, it could unfairly penalize minorities or the poor. Something along these lines happened in the Netherlands in 2019, affecting tens of thousands of taxpayers.

Third-party AI liability insurance will eventually apply to such cases, although for now they will probably fall under general liability policies as the market evolves.

“The liability landscape has not really changed. If I make a mistake that leads to damages, it doesn’t matter whether it was with the use of AI or without the use of AI—the general liability will be the same,” Pötzlberger says. “It could be that damages occur more often because the technologies are more efficient. But the grounds for liability will be the same.”

And the coverage is the same, too, for now at least. “The default is likely to be that AI is covered,” he adds.

That could change with the EU’s forthcoming AI Liability Directive, a companion to the AI Act that would introduce new rules regarding damages caused by AI systems—essentially making it easier for individuals to sue if they suffer harm from an AI system.

While it’s not known when the directive will be finished and go into effect, it’s likely to “enable a significant number of legal claims from individuals,” Dawson says. “And that will put liability pressure on insurers.”

That’s driving development of new insurance products offering companies protection against AI-related legal actions. Armilla expects to introduce its own liability-focused product later this year.

In general, we expect the range of AI covers to increase, as the compliance and regulatory environment firms up and as the scope and extent of silent AI coverage is tested in the market.
Michael Berger, Head of Insure AI, Munich Re

“We are hearing about other companies developing such products as well,” says Jerry Gupta, head of AI products & insurance at Armilla. Given the time it can take to bring new insurance products to market in the United States, “I think we probably will start seeing a lot more similar products by the end of 2025,” he says.

Munich Re is also focused on new products for AI risk. For example, Berger says his team is exploring an insurance product that affirmatively covers IP infringement risks for all parties, from companies using artificial intelligence to vendors of AI models. “AI systems can produce outputs that infringe copyright, in the sense that they bear sufficient objective similarity to an original work. This risk is inherent to these models, as they are trained on existing content. Infringement can happen in all forms of media: images, video, music, and text,” Berger says. “Our policies aim to bring clarity of coverage during this transitional phase of AI adoption.”

For insurers and brokers that use AI within the European Union, the new law will create a special kind of burden that should also lead to new products.

“Each actor needs to be sure that the previous actor has complied with the AI Act,” Haie says. “So the provider of the AI system has to comply with the Act, and whoever is importing the system into the EU will have to make sure the provider has complied. And the deployer needs to make sure that the importer and the provider have complied with their respective obligations.”
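The chain of obligations Haie describes is a linked verification: each actor in the AI supply chain must check the actors upstream of it. The sketch below is purely hypothetical—the actor names, flags, and check functions illustrate the structure of the dependency, not any regulator’s actual procedure.

```python
# Each actor must verify its own duties plus every upstream actor's compliance.
CHAIN = ["provider", "importer", "deployer"]  # ordered upstream to downstream

def chain_compliant(status):
    """True only if every actor in the chain has met its obligations."""
    return all(status[actor] for actor in CHAIN)

def first_gap(status):
    """Return the first non-compliant actor in the chain, or None."""
    for actor in CHAIN:
        if not status[actor]:
            return actor
    return None

status = {"provider": True, "importer": False, "deployer": True}
print(chain_compliant(status))  # → False: the importer's gap breaks the chain
print(first_gap(status))        # → importer
```

The structural point is that a single gap anywhere upstream leaves every downstream actor exposed—which is exactly the uncertainty that warranty and liability products aim to absorb.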

That will ultimately lead to creation of an AI insurance market. “We’re already seeing some AI policies being written around that,” Cohen says. “They have not yet intersected with regulatory obligations, but we’ll clearly get there.”

Berger offers a similar thought: “In general, we expect the range of AI covers to increase, as the compliance and regulatory environment firms up and as the scope and extent of silent AI coverage is tested in the market.”

For now, AI cover can come through existing errors and omissions policies, and through “policies that specifically cover the malfunctioning of technology products that a company is dependent on,” Clark says.

Specialized policies could arrive in the market as well. “If there are specific AI systems that are really critical to a company, you could see that company wanting a specialist bespoke cover specifically to address the risk that there is an error or omission by that critical AI system—outside of it being covered by a general liability policy,” he adds.

Munich Re is already applying this idea. “Given the rapid pace of advancement in the field, we routinely collaborate with our clients to offer custom solutions to protect them against the AI risks that are most relevant to their industry, business model, and AI technology they use,” Berger says.

‘Necessary Guardrails’

While insurers and brokers have generally reacted positively to the EU’s new AI law, some see the potential for unintended business impacts on the industry.

“We like the AI Act because it really provides the necessary guardrails to use AI responsibly,” says Philipp Räther, group chief privacy and AI ethics officer at Allianz, which he says has been developing and using AI systems for years. Under the new law, “as a business, you now know what to do and what not to do.” That clarity enhances trust in the application of AI, Räther adds.

Early in development of the legislation, there was talk that “the entire insurance sector would have to be considered as a high-risk area, with all the intensive regulations that come with this classification,” Pötzlberger says. That shifted during negotiations, and now only the life and health insurance sectors face the strictest regulations.

There is still some resistance, Körner says. Some trade groups, for instance, “have been a bit critical about the limitations on the use of AI in health insurance.”

The German Insurance Association, for one, believes existing laws are generally sufficient. “AI systems for premium calculation, underwriting and claims settlement in the insurance sector are subject to strict requirements by general laws as well as the stringent regulatory framework for financial services,” the group said in a statement. “Supplemental regulation should be considered only for high-risk AI applications.”

Others worry that the law could encourage more consolidation among brokers. “The concern is that a smaller broker may say, ‘You know what, that’s too much. I can’t afford yet another regulation,’ and then give up and sell to somebody,” Körner says.

It could also limit business opportunities within the EU.

“I wouldn’t be surprised if, in those high-risk areas like health and life insurance, we see less innovation and less automation in the EU versus other markets, because of the complexities of being innovative in the EU while complying with the AI Act,” says Clark.

There’s precedent for that, stemming from the General Data Protection Regulation (GDPR), the EU’s landmark 2016 law around information privacy. Some companies today offer different products and services outside the EU versus what they offer inside the Union, “because they think the risks of doing so while trying to comply with GDPR are too high,” according to Clark. “That’s an existing construct that we could see also being applicable in relation to the AI laws.”

AI regulation is coming to the United States as well, although the federal government has moved slowly compared to the EU; an executive order directing how federal agencies use the technologies is its farthest-reaching action so far.

“States are rushing to fill this perceived gap that’s being left by the lack of a national AI law,” says Kevin Allison, president of Minerva Technology Policy Advisors. That includes California, where the state legislature in late August approved a bill requiring developers of advanced AI models to adopt certain safety measures.

In May, Colorado became the first state to enact comprehensive AI regulations with a new law that takes effect in 2026. Like the EU AI Act, it takes a risk-based approach to regulating both the development and deployment of artificial intelligence systems.

Connecticut was working on a similar bill that would have regulated the development and use of AI systems. But the legislation’s sponsors pulled it earlier this year after Gov. Ned Lamont said he would veto it because he would prefer to see states work together on AI regulation, rather than each developing its own rules. He also expressed concerns about his state being one of the first to implement such wide-ranging regulations.

Expect more such bills to come. “I think states are very much looking at this,” Westby says. She also thinks the EU AI Act is likely to continue to be a model for many state lawmakers and regulators. “They will look at a well-written and established law, especially because the EU is going to be gaining experience on regulating AI ahead of everybody else.”

Allison, though, favors a different tack. “The current U.S. approach—putting the focus on enforcing your existing rulebook first, and then looking to identify gaps and try to figure out more measured and targeted laws to those gaps—seems to be a little smarter in my view,” he says. “I think it’s more likely to work.”

Chris Larson, contributing writer, Leader's Edge
