AI Act - Insurance spotlight
The European Union’s Artificial Intelligence Act (AI Act) is one of the first attempts by a major regulator to address the growing prevalence of AI technologies. Businesses that develop, use or supply AI systems will be affected. Importantly, this includes insurance sector firms and service providers.
On 9 December 2023, the European Parliament and the Council reached political agreement on the AI Act. The next step is for the Act to be formally voted on by the European Parliament and the Council and for the finalised text to be published in the Official Journal.
This short article highlights some key insurance dimensions of the proposed AI Act.
Insurance AI
Resisting calls to confine the scope of what constitutes AI, the legislators settled on a far-reaching definition in the AI Act. It reflects the recently updated definition from the Organisation for Economic Co-operation and Development (OECD):
“An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”
The goal was to future-proof the Act and capture AI technologies not yet developed.
AI is used at all levels of the insurance industry, from claims automation and fraud detection solutions to chatbots and virtual insurance agents. As AI continues to develop and evolve, its use and uptake within the insurance sector are expected to increase rapidly.
Insurance activities – high or low risk?
Certain insurance-related activities are categorised as high risk in the Act. Others (depending on the nature of the activity) can fall into a variety of other categories.
The AI Act introduces a four-tier risk categorisation system, ranging from unacceptable to minimal risk. The level of protection is linked to the level of risk posed. The four tiers are as follows:
- Unacceptable risk: Considered a clear threat to fundamental rights. Will be banned. For example, social scoring.
- High risk: Constitute a high risk to health and safety or fundamental rights. Authorised but subject to stringent restrictions. For example, automated insurance claims.
- Limited risk / specific transparency risk: Interact with humans. Must meet specific transparency obligations. For example, chatbot systems.
- Low or minimal risk: Present minimal or no risk to citizens’ rights or safety. Face no regulation. For example, spam filters.
High-risk insurance activities
High-risk AI systems under the Act have been described by the European Parliament as including those that make (or materially influence) decisions on individuals’ eligibility for health and life cover.
Insurance companies will have to comply with a wide range of technical and governance measures for all high-risk AI activities.
Measures are expected to include risk-mitigation systems, high-quality data sets, logging of activity, detailed documentation, clear user information, human oversight, and a high level of robustness, accuracy and cybersecurity. In addition, a mandatory fundamental rights impact assessment will need to be undertaken. A complaint mechanism will also apply to this category (and, on foot of complaints, individuals can request explanations about decisions based on high-risk AI systems which impact their rights).
Obligations stemming from high-risk activities can be triggered by insurance firms’ roles as (a) deployers (essentially, users of AI systems) and (b) providers (for example, where they develop AI systems internally).
General Purpose AI Systems
Many AI systems used by insurance sector firms (and service providers) could be classified as General Purpose AI (GPAI). Examples include dialogue generation for virtual assistants, optimised underwriting and pricing (through analysing historical data) and marketing / sales content generation.
GPAI is AI that can be used in, and adapted to, a wide range of applications and that can handle many different tasks (for example, image and speech recognition, audio and video generation, and question answering, among many others).
The AI Act proposes a two-tier regulation system for GPAI, depending on whether the GPAI is high impact or low impact. For low impact GPAI models, various transparency requirements would apply. These include drawing up technical documentation, complying with EU copyright law and disseminating detailed summaries of the content used for training. For high impact GPAI models (those posing systemic risk), more onerous obligations apply. These include model evaluations, assessment and mitigation measures, adversarial testing requirements and cybersecurity protections / reports.
Penalties
Penalties would depend on the type of violation.
Under the latest framework agreed, violations involving banned AI applications would attract fines of €35m or 7% of the offending company’s worldwide annual turnover for the previous financial year, whichever is higher. Protections for SMEs and start-ups are built into the AI Act, with more proportionate caps in place for their infringements.
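To illustrate the “whichever is higher” mechanic, the short sketch below computes the cap for banned-practice violations. The figures (€35m and 7%) come from the agreed framework; the function itself is a hypothetical illustration, not part of the Act’s text and not legal advice:

```python
def banned_practice_fine_cap(worldwide_annual_turnover_eur: float) -> float:
    """Illustrative only: the cap for violations involving banned AI
    applications is the higher of EUR 35m or 7% of the offending company's
    worldwide annual turnover for the previous financial year."""
    fixed_cap_eur = 35_000_000  # EUR 35m
    turnover_share_eur = 0.07 * worldwide_annual_turnover_eur  # 7% of turnover
    return max(fixed_cap_eur, turnover_share_eur)

# A firm with EUR 1bn turnover: 7% (EUR 70m) exceeds EUR 35m, so the cap is EUR 70m.
print(banned_practice_fine_cap(1_000_000_000))  # 70000000.0
```

For smaller firms, the fixed €35m figure dominates; the 7% turnover limb only bites once worldwide annual turnover exceeds €500m.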
Looking forward
The sphere of AI is evolving, both in its own development and in its regulation. Monitoring both will remain a key priority for the insurance sector. EIOPA, in its Supervisory Convergence Plan for 2024, committed to further developing its work on the governance and risk management of specific AI use cases (taking the Act into account).
While the core elements of the AI Act were agreed in December 2023, the next step is for the European Parliament and the Council to formally vote on it. This is expected in early 2024.
For advice or for further information on this topic, please contact Sinéad Lynch, Partner, or any member of ALG’s Insurance & Reinsurance team.
Date published: 22 December 2023