
Insurance & Reinsurance

AI Act impact for Insurers

The European Union’s Artificial Intelligence Act entered into force on 1 August 2024 and will become effective in phases over the next 24 to 36 months.

Wed 07 Aug 2024

8 min read

The European Union’s Artificial Intelligence Act (AI Act) entered into force on 1 August 2024 and will become effective in phases over the next 24 to 36 months. As has been widely reported, it will have an enormous impact on a whole range of sectors, but arguably few more so than the insurance sector.

In this article, we consider how the AI Act will impact insurers and what needs to be done now to prepare for its gradual implementation.

What’s special about insurers?

To compete successfully, insurers are constantly innovating and tailoring product offerings in response to customer needs and behaviours. A recent EIOPA report[1] indicated that 50% of non-life insurers and 24% of life insurers responding to an EU market-wide survey use some form of AI in their business. Many insurers' public statements on their use of AI suggest that those percentages may be understated. Even if they are not, those figures will have increased in recent months and will continue to increase significantly in the months and years ahead.

The Central Bank of Ireland (CBI) is monitoring insurers’ (and other financial institutions’) approaches to the use of AI. In its recently published draft Guidance on Securing Customers’ Interests[2], it has made it clear that it will intervene where firms seek to unfairly exploit or take advantage of consumer behaviours, habits, preferences, or biases to benefit the firm in a way that causes customer detriment. This separate regulatory dimension to the need to mitigate AI-related risks gives increased impetus to insurers’ planning for AI Act compliance.

What’s new?

In our December 2023 update, we highlighted some key insurer-related dimensions of the then proposed AI Act. Very little has changed since, beyond minor drafting corrections and the timelines for its application now being set in stone.

Insurers carrying out “high risk” AI activities

Life and health insurers are called out in the AI Act as potentially in scope for wide-ranging technical and governance measures. High risk activities include the use of AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance. However, in light of Ireland's community rated[3] domestic health insurance market, it seems more likely that only life insurers will have obligations in this particular context.

Where an insurer is engaging in a high risk AI activity, it will be required to comply with a wide range of technical and governance measures. An insurer’s obligations can stem from it being treated as:

       a) a provider of AI (i.e. where it develops a piece of AI software internally)

       b) a deployer of AI (where it uses an externally developed AI system for business functions, which is more typically the case)

High risk AI systems will be subject to strict requirements before being deployed on the market, including risk management, human oversight, logging and transparency measures, each discussed below.

Certain of these requirements overlap heavily with existing and anticipated consumer protection requirements, for example the digitalisation-related requirements of the revised Consumer Protection Code and related Standards. Insurers may consider progressing steps for compliance with both in tandem.

The European Commission is also due to release further guidance on high risk AI systems by 2 February 2026, including a list of examples of uses that are high risk and those which are not.

Insurers using high risk AI systems will be required to implement a robust risk management framework or, more likely, adapt their existing risk management frameworks to ensure full compliance. This will involve risk assessments both before and after the AI system is put into use, including an assessment of the system's potential impact on the health, safety and fundamental rights of any person affected by the system. Insurers will have to document any risk management measures taken in relation to high risk AI systems to reduce the risks identified.

Human oversight measures are also a key focus of the AI Act in relation to high risk AI systems. Insurers that deploy a high risk AI system must ensure that a natural person can appropriately monitor the system and is capable of identifying and addressing any dysfunctions that arise. Aside from the AI Act consequences of failing to do this (see below), a failure of human oversight may also give rise to Solvency II (oversight, governance and risk) and separate consumer-related breaches. This particular obligation may therefore be a focus of the CBI.

Where any person is affected by a decision of a high risk AI system, that person has a right to a “clear and meaningful” explanation of the role of the AI system in the making of that decision. As a result, insurers will need to ensure that their use of AI systems is carefully logged.
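The AI Act does not prescribe any particular logging format. Purely by way of illustration, the sketch below (in Python) shows the kind of structured record an insurer might keep for each AI-assisted decision so that the role of the AI system can later be explained; the field names and structure are our own assumptions, not requirements of the Act.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

# Illustrative record of an AI-assisted decision. Field names are assumptions
# chosen for this sketch; they are not prescribed by the AI Act.
@dataclass
class AIDecisionLogEntry:
    timestamp: str          # when the decision was made (UTC, ISO 8601)
    system_name: str        # which AI system was involved
    system_version: str     # version of the model/system used
    purpose: str            # e.g. "life insurance risk assessment and pricing"
    inputs_reference: str   # pointer to the input data relied on
    output_summary: str     # the system's output or recommendation
    human_reviewer: str     # person exercising human oversight, if any
    final_decision: str     # the decision actually communicated to the customer

def log_ai_decision(entry: AIDecisionLogEntry, path: str = "ai_decision_log.jsonl") -> None:
    """Append a single decision record to a JSON Lines file."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(entry)) + "\n")

# Example usage with hypothetical values
log_ai_decision(AIDecisionLogEntry(
    timestamp=datetime.now(timezone.utc).isoformat(),
    system_name="underwriting-risk-model",
    system_version="2.3.1",
    purpose="life insurance risk assessment and pricing",
    inputs_reference="application-ref-12345",
    output_summary="risk band B; premium loading 10%",
    human_reviewer="underwriter.jsmith",
    final_decision="offer issued with 10% loading",
))
```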

Of broader application: General purpose AI

There will also be obligations stemming from the use of general-purpose AI (GPAI). GPAI is AI that can be used in and adapted to a wide range of applications, and which can handle many different tasks (for example, image and speech recognition, audio and video generation, question answering and many others). Examples of GPAI that may be used by many insurers include dialogue generation for virtual assistants (i.e. chatbots), pricing (through analysing historical data) and marketing / sales content generation.

GPAI will be regulated on a two-tier system – low impact and high impact. For low impact GPAI models, various transparency requirements will apply, including drawing up technical documentation, complying with EU copyright law and disseminating detailed summaries about the content used for training. For high impact GPAI models (which involve extremely high levels of computing power with the potential to cause systemic risk), more onerous obligations will apply, including model evaluations, assessment and mitigation measures, adversarial testing requirements and cybersecurity protections / reports.

In each case, an insurer deploying GPAI must inform any users of the GPAI system that they are engaging with an AI system and must ensure that staff interacting with the system have sufficient AI literacy and training. These customer communication and staff competency obligations are unsurprising and, as a theme, will resonate with most insurers.

AI Act penalties

The AI Act provides for the potential of significant fines where in-scope businesses fall short. The precise circumstances in which fines will be imposed will be set out in national implementing legislation. For now, the AI Act sets out maximum fine levels that scale with the seriousness of the breach, ranging up to the higher of €35m or 7% of worldwide annual turnover for the most serious infringements.
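As a simple illustration of the "higher of" cap arithmetic for the most serious infringements (using only the top-tier figures noted above; the actual level of any fine would depend on the circumstances and on national implementing measures):

```python
def maximum_top_tier_fine(worldwide_annual_turnover_eur: float) -> float:
    """Maximum fine for the most serious AI Act infringements: the higher of a
    fixed EUR 35m cap and 7% of worldwide annual turnover."""
    return max(35_000_000, 0.07 * worldwide_annual_turnover_eur)

# Example: an insurer with EUR 1bn worldwide annual turnover
print(maximum_top_tier_fine(1_000_000_000))  # 70000000.0 -> EUR 70m, as 7% exceeds EUR 35m
```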

As already highlighted, where an insurer breaches the AI Act, it is likely that it will also breach other existing financial services-related regulation and bring itself in scope for CBI sanction as a result.

AI Act Timeline

The AI Act will become applicable on a phased basis: prohibitions on certain AI practices will apply from 2 February 2025, the general-purpose AI obligations from 2 August 2025, the majority of the AI Act's provisions (including most of the high risk AI system requirements) from 2 August 2026, and the remaining high risk requirements from 2 August 2027.

While the AI Act has direct effect as an EU regulation, it also requires individual member states to legislate for supervision and enforcement of the AI Act at a national level. These measures will include the selection of a national competent authority to ensure compliance with the AI Act.

Next Steps for Insurers

Insurers have many significant regulation-driven projects underway and pending (DORA, CPC, IAF and climate-risk related projects, to name a few). The AI Act must now feature too.

We suggest some key steps that insurers can take now to prepare for the gradual application of their obligations under the AI Act over the next three years, including mapping current and planned uses of AI across the business, identifying any high risk or general-purpose AI systems in use, adapting existing risk management and governance frameworks, and putting appropriate human oversight, logging and staff training measures in place.

For a more detailed breakdown of the provisions of the AI Act, please see the detailed guide to the AI Act prepared by our Technology group, who are experts in all aspects of AI regulation.

For advice on any aspect of this article, please contact Laura Mulleady, Partner, Sinéad Lynch, Partner, Niall Guinan, Associate or any member of ALG’s Insurance & Reinsurance team.


[1] EIOPA-BoS-24/139 - Report on the Digitalisation of the European Insurance Sector – 30 April 2024

[2] Published in March 2024

[3] Everyone pays the same premium for a particular level of health insurance coverage, regardless of individual factors like age, health or gender. 
