Gary is a Principal Consultant within our insurance practice in Dublin. He has 16 years of experience across the life and non-life (re)insurance sectors, covering industry, audit, and consultancy roles. His expertise covers financial reporting, prudential and conduct risk management, and assurance activities. Gary has provided outsourced actuarial, risk, compliance, and internal audit function services for a wide range of insurers, reinsurers, and captives.
As artificial intelligence reshapes the insurance landscape, Boards and control functions grow increasingly cautious about the risks posed by rapidly evolving technologies. At the same time, innovators grow frustrated as outdated governance frameworks stifle progress. Striking a balance between innovation and oversight begins with the fundamentals - establishing a well-structured AI Risk Management Policy. But where should insurers start? This article offers a practical guide to developing an AI Risk Management Policy while also exploring key insights from EIOPA’s recent Opinion on Artificial Intelligence and Risk Management.
AI models are inherently complex, and unclear terminology only adds to the challenge. A strong AI Risk Management Policy should establish clear, company-wide definitions of key terms to prevent misunderstandings and misapplications.
Beyond definitions, the Policy should outline which AI systems fall within its scope. This may include underwriting and reserving models, claims automation tools, customer chatbots, and other AI-driven processes.
Importantly, the Policy must specify whether it applies solely to in-house AI or also covers third-party AI solutions integrated into the insurer’s operations.
While a standalone AI Risk Management Policy may be necessary for many insurers, it should not exist in isolation from the broader Enterprise Risk Management Framework.
Insurers should consider whether the AI Risk Management Policy functions as a subset of the overarching Risk Management Policy and how it integrates with existing governance structures. Crucially, its development must align with the firm’s Risk Appetite and internal controls framework.
Like any governance document, the AI Risk Management Policy should clearly define how often it is updated, who is responsible for maintaining it, and whether approval rests with the Board or a designated subcommittee, such as the Risk Committee.
While an annual review is standard practice for most policies, the rapid evolution of AI and insurance use cases may necessitate more frequent updates.
The Board holds ultimate accountability for AI strategy, AI risk management, and oversight. The AI Risk Management Policy should clearly define any responsibilities delegated to its subcommittees.
For larger and more sophisticated insurers, appointing a Chief AI Officer may become standard practice. Where such a role exists, the Policy should clearly define the extent of the risk management responsibilities it carries. In some cases, firms may establish a dedicated AI Committee based on the nature, scale, and complexity of their AI-driven operations.
The Policy should also recognise the responsibilities of key role holders in AI risk management.
Insurers should maintain a comprehensive inventory of all AI systems in use. At a minimum, the inventory should classify each AI system by level of risk in keeping with the AI Act, distinguishing prohibited practices (Article 5) and High-Risk systems (Article 6) from lower-risk applications.
While the inventory itself does not need to be included within the AI Risk Management Policy, the Policy should clearly define who owns the inventory, how it is kept up to date, and how often it is reviewed.
Although multiple stakeholders may contribute to maintaining the inventory, best practice is to assign ownership to a designated individual to ensure consistency and accountability.
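By way of illustration, each inventory entry could be captured as a structured record. The minimal Python sketch below shows one way to do this; the field names and risk tiers are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """Risk tiers loosely following the AI Act's classification logic."""
    PROHIBITED = "prohibited"      # Article 5 practices
    HIGH_RISK = "high_risk"        # Article 6 classification
    LIMITED_RISK = "limited_risk"  # transparency obligations
    MINIMAL_RISK = "minimal_risk"


@dataclass
class AISystemRecord:
    """One entry in the AI system inventory (illustrative fields only)."""
    system_name: str
    business_use: str          # e.g. "motor pricing", "claims triage"
    risk_tier: RiskTier
    owner: str                 # designated accountable individual
    third_party: bool          # in-house vs vendor-supplied
    last_reviewed: str         # ISO date of the last risk review


# Example entry for a hypothetical underwriting model
record = AISystemRecord(
    system_name="life-underwriting-gbm",
    business_use="life underwriting decision support",
    risk_tier=RiskTier.HIGH_RISK,
    owner="Head of Actuarial Function",
    third_party=False,
    last_reviewed="2025-01-15",
)
print(record)
```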
The AI Act has introduced significant obligations for providers and deployers of High-Risk AI systems. However, EIOPA’s recent Opinion on Artificial Intelligence and Risk Management has highlighted that many existing laws and regulations already apply to AI systems used in insurance. Some of these regulations will require new interpretations as AI adoption increases.
An AI Risk Management Policy must account for the full spectrum of relevant laws, regulations, and governance requirements. Key regulatory articles from various EU regulations that insurers should assess include:
- AI Act: Articles 4, 5, 6, 9, 10, 12, 14, and 15
- Solvency II: Article 41 of the Directive and Article 258 of the Delegated Regulation
- Insurance Distribution Directive: Articles 17 and 25
- DORA: Articles 4, 5, and 6
- Product Oversight & Governance: Articles 7 and 9
This list is certainly not exhaustive - insurers must also consider regulations such as GDPR, local consumer protection codes, and guidance from national regulatory authorities. Additionally, some insurers must align with group-level governance requirements, which should be explicitly addressed in the AI Risk Management Policy.
The following sections explore the topics introduced above in further detail.
AI risk management systems should be proportionate to the nature, scale, and complexity of the AI system in question. For example, a black-box neural network used to underwrite life insurance policies at scale poses significant risks to both customers and the insurer’s financial stability. In contrast, an AI-powered email spam filter carries minimal risk and is unlikely to cause widespread adverse outcomes.
Proportional risk management is already an established principle in insurance regulation. The Solvency II Directive (Article 41), Insurance Distribution Directive (Article 25), and DORA (Articles 4, 5, and 6) all emphasise the need for governance systems that reflect the scale and complexity of the risks they manage.
Once an AI system inventory is created, insurers will likely identify multiple AI systems with varying levels of risk. In practice, a case-by-case proportionality assessment may be required. The AI Risk Management Policy should establish a structured framework to evaluate the risk level of each AI system and provide guidelines on appropriate risk management measures.
The risk assessment framework may consider factors such as the scale of deployment, the potential impact on customers, the materiality of the decisions the system supports, and the complexity and explainability of the underlying model. A simple scoring sketch follows below.
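A minimal sketch of such a scoring rubric, assuming hypothetical factors rated 1 to 5 and illustrative thresholds; a real framework would be calibrated to the insurer's risk appetite.

```python
def proportionality_score(customer_impact: int,
                          decision_automation: int,
                          data_sensitivity: int,
                          model_opacity: int) -> str:
    """Map illustrative 1-5 factor ratings to a risk management tier.

    The factors, weights, and thresholds here are hypothetical.
    """
    total = customer_impact + decision_automation + data_sensitivity + model_opacity
    if total >= 16:
        return "enhanced controls (full lifecycle risk management)"
    if total >= 10:
        return "standard controls"
    return "baseline controls"


# A black-box underwriting model scores high on every factor...
print(proportionality_score(5, 4, 5, 5))  # -> enhanced controls
# ...while an email spam filter scores low.
print(proportionality_score(1, 2, 1, 2))  # -> baseline controls
```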
Article 9 of the AI Act outlines risk management requirements for High-Risk AI systems. While these requirements are mandatory for high-risk AI, insurers may choose to apply certain elements of this framework to other AI systems as well. Notably, Article 9 requires risk management processes to operate across the full lifecycle of AI models, ensuring continuous oversight from development to deployment and beyond.
Article 17 of the Insurance Distribution Directive (IDD) mandates that insurance distributors must always act honestly, fairly, and professionally in the best interests of their customers.
To uphold these principles, the AI Risk Management Policy should include controls designed to secure fair outcomes for customers.
The Policy should further align with relevant fairness regulations and codes of conduct. For instance, the EU Court of Justice's 2011 ruling on gender rating, applicable from December 2012, prohibits the use of gender as a premium rating factor. Compliance with this ruling has been relatively straightforward using traditional pricing models, but black-box AI models may present new challenges: advanced AI systems may develop proxy rating factors that indirectly reflect gender, even when gender is not explicitly included in the training dataset.
Additionally, the EIOPA Opinion highlights concerns about differential pricing practices, emphasising the need for insurers to develop strategies to mitigate such risks and ensure fair treatment of consumers.
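One illustrative control against such proxy effects is to test how accurately the rating features themselves can predict the protected characteristic: performance well above chance signals proxy risk. The sketch below uses scikit-learn on synthetic data and is a simplified screening step, not a complete fairness audit.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical rating features (no explicit gender column).
n = 5000
gender = rng.integers(0, 2, n)                         # protected attribute, held out
occupation_code = gender * 2 + rng.integers(0, 3, n)   # deliberately correlated proxy
annual_mileage = rng.normal(12000, 3000, n)            # uncorrelated feature
X = np.column_stack([occupation_code, annual_mileage])

# If the rating features can predict gender well above chance,
# the pricing model may be discriminating through a proxy.
proxy_auc = cross_val_score(
    LogisticRegression(), X, gender, cv=5, scoring="roc_auc"
).mean()
print(f"Proxy AUC for gender: {proxy_auc:.2f}")  # ~0.5 = no leakage; near 1.0 = strong proxy
```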
Most insurers have well-established data governance practices with several regulatory frameworks already setting data quality and governance standards.
Article 10 of the AI Act sets new data governance obligations for High-Risk AI systems. Depending on an insurer’s risk profile, some of these requirements may also be adopted for other AI systems as a best practice.
At a minimum, the AI Risk Management Policy should establish strong governance standards for data used in training and testing AI models, ensuring appropriate measures are in place to eliminate bias. While data cleansing and transformation can help reduce bias, insurers should implement controls to prevent unintended management bias from influencing the data transformation process.
The policy should clearly state that data governance standards apply to both internal and externally sourced data.
Finally, insurers must ensure that the AI Risk Management Policy aligns with existing Data Protection & Privacy Policies to maintain regulatory compliance and avoid conflicts between governance frameworks.
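As a minimal illustration of pre-training data screening, the sketch below checks two basic governance concerns, missing values and segment representation, on a hypothetical policyholder extract; real workflows would add range checks, outlier detection, and lineage tracking.

```python
import pandas as pd


def screen_training_data(df: pd.DataFrame, segment_col: str) -> pd.DataFrame:
    """Basic pre-training screening: missingness and segment representation.

    This is a minimal illustration, not a full data governance workflow.
    """
    print("Segment representation (%):")
    print(df[segment_col].value_counts(normalize=True) * 100)
    # Per-column percentage of missing values
    return pd.DataFrame({"missing_pct": df.isna().mean() * 100})


# Hypothetical policyholder extract
df = pd.DataFrame({
    "age": [34, 51, None, 47],
    "region": ["urban", "urban", "rural", "urban"],
    "claim_amount": [1200.0, 0.0, 430.0, None],
})
print(screen_training_data(df, segment_col="region"))
```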
Maintaining detailed records is essential for accountability and auditability, as outlined in Article 258 of the Solvency II Delegated Regulations and Article 9 of the Product Oversight & Governance requirements.
The high computational power and automation of AI systems may render traditional manual record-keeping protocols insufficient. To address this, Article 12 of the AI Act mandates that High-Risk AI systems must automatically log key information, such as the input data used and system performance monitoring.
The extent of automatic logging required on other AI models will depend on the nature, scale, and complexity of each AI system. At a minimum, the AI Risk Management Policy should require an "auditability by design" approach for developing and procuring AI models.
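To illustrate what "auditability by design" can mean in practice, the sketch below wraps model inference so that every prediction automatically emits an audit record; the wrapper, field names, and model are hypothetical.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")


def audited_predict(model, model_version: str, features: dict) -> float:
    """Run a prediction and automatically log an audit record for it."""
    score = model.predict(features)
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": features,
        "output": score,
    }))
    return score


class DummyModel:
    """Stand-in for a real scoring model."""
    def predict(self, features: dict) -> float:
        return 0.42


audited_predict(DummyModel(), "underwriting-v1.3", {"age": 42, "sum_assured": 250000})
```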
Additionally, insurers should establish clear policies for the retention of AI-related records, documentation, and logs.
Effective governance regimes rely on measurable and monitorable metrics. AI-assisted decision-making must be transparent, with the underlying models explainable.
The degree of explainability required will vary depending on the nature of the use case and the potential impact of the model's decisions.
Not all machine learning models offer the same level of interpretability. Neural networks are highly accurate but often lack explainability. Decision trees, in contrast, are typically more transparent and easier to interpret.
The AI Risk Management Policy should outline the insurer's appetite for using less explainable models and define the conditions under which they may be deployed.
Where less explainable models are used, explainability tools should be leveraged to mitigate risks to within acceptable thresholds. The Policy should define clear criteria for this. EIOPA’s Opinion references widely used explainability tools such as LIME and SHAP, but also stresses the importance of documenting their limitations.
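For illustration, the sketch below applies SHAP to a synthetic tree-based model to decompose individual predictions into per-feature contributions; the model and data are fabricated for the example, and in line with EIOPA's Opinion, the tool's limitations should be documented alongside its outputs.

```python
import numpy as np
import shap  # pip install shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)

# Synthetic pricing-style data: the target is driven mainly by the first feature.
X = rng.normal(size=(500, 3))
y = 3 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

model = GradientBoostingRegressor().fit(X, y)

# SHAP values decompose each prediction into per-feature contributions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])
print(shap_values)  # one contribution per feature, per prediction
```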
The Policy should also establish strict guidelines for human oversight of less explainable models to prevent unintended biases and errors.
Additionally, the Policy can be strengthened by distinguishing between inherently interpretable models and those that rely on post-hoc explainability tools.
[Figure: An illustrative accuracy vs explainability trade-off across common machine learning models]
Article 4 of the AI Act mandates that firms ensure an adequate level of AI literacy among their staff. The AI Risk Management Policy should establish a training framework tailored to the needs of employees across different functions.
At a minimum, the Board of Directors must have a sufficient understanding of how AI is used within the organisation and the associated risks. More specialised training programmes can be developed based on AI use cases and individual roles.
The Policy should also define expectations for human oversight, particularly where AI outputs feed into decisions that materially affect customers.
Article 258 of the Solvency II Delegated Regulations requires insurers to maintain qualified and competent staff for control functions. The AI Risk Management Policy should specify the necessary AI literacy levels to comply with these obligations.
Additionally, Article 7 of the Product Oversight & Governance rules requires insurers to continuously monitor and review their insurance products. This process becomes more complex when AI models are involved, making it crucial for the Policy to define the appropriate level of human oversight for AI-driven products.
Finally, Article 14 of the AI Act states that High-Risk AI systems must be designed with human-machine interface tools to ensure appropriate oversight and intervention capabilities.
AI systems must be secure, consistent, and reliable throughout their entire lifecycle - not just at deployment.
The Policy should outline how security, consistency, and reliability are to be maintained at each stage of the AI lifecycle.
To mitigate risks, the Policy should establish minimum cybersecurity requirements for AI systems, with heightened protections for models handling large volumes of sensitive data.
For High-Risk AI systems, Article 15 of the AI Act must be followed, ensuring appropriate standards of accuracy, robustness, and cybersecurity.
Additionally, the Policy should address protocols for responding to AI system failures, performance degradation, and security incidents.
Consistency and reliability should be the foundation of AI models used in insurance. The Policy should set high standards for rigorous testing to ensure AI models perform reliably across a range of conditions and do not overfit their training data - a particular risk for more sophisticated models. A basic check of this kind is sketched below.
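As a minimal illustration, comparing training performance against held-out performance is one basic test such a standard might mandate; the model, data, and threshold below are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 10))
y = X[:, 0] + rng.normal(scale=1.0, size=400)  # mostly noise: easy to overfit

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# A deliberately flexible model to demonstrate the train/test gap.
model = RandomForestRegressor(n_estimators=200, max_depth=None).fit(X_tr, y_tr)

train_r2 = r2_score(y_tr, model.predict(X_tr))
test_r2 = r2_score(y_te, model.predict(X_te))
print(f"train R2={train_r2:.2f}, test R2={test_r2:.2f}")

# A large gap between training and held-out performance flags overfitting.
if train_r2 - test_r2 > 0.2:  # illustrative threshold
    print("Warning: possible overfitting - review model complexity.")
```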
As AI continues to transform the insurance industry, establishing a robust AI Risk Management Policy is essential to balancing innovation, governance, and regulatory compliance. Finalyse recommends that insurers proactively address AI-related risks while ensuring fairness, transparency, and accountability in their AI-driven decision-making processes.
Our team of experts helps firms navigate the complexities of AI risk.
Please reach out to one of our experienced Finalyse consultants at insurance@finalyse.com.