EU AI Act and Financial Services: Compliance Guide for Banks and Insurers
How the EU AI Act affects banks, insurers, and financial institutions. Covers credit scoring, fraud detection, AML/KYC, insurance pricing, and dual compliance with DORA, MiFID II, and PSD2.
Financial Services Face the Highest AI Compliance Stakes
Few sectors use AI as pervasively as financial services, and none faces more regulatory overlap when the EU AI Act's full enforcement arrives in August 2026. Banks, insurers, payment providers, and investment firms have spent years building AI into their core operations: credit scoring, fraud detection, anti-money laundering, insurance underwriting, algorithmic trading, and customer service automation.
Most of these AI applications fall squarely into the EU AI Act's high-risk category. That means extensive obligations around risk management, transparency, human oversight, and documentation. But financial institutions cannot comply with the AI Act in isolation. They must also satisfy DORA, MiFID II, PSD2, Solvency II, and the existing GDPR framework — all of which have their own requirements for algorithmic systems.
This guide maps out exactly where the EU AI Act hits financial services, what dual compliance looks like in practice, and how to build a compliance program that satisfies every regulator at once.
Credit Scoring: The Flagship High-Risk Use Case
What the AI Act Requires
Credit scoring is explicitly listed in Annex III, Category 5(b) as a high-risk AI application. The regulation covers AI systems used to evaluate the creditworthiness of natural persons, which includes any automated system that assesses whether someone qualifies for a loan, credit card, mortgage, or other financial product.
This is not limited to fully automated credit decisions. If an AI system generates a score or recommendation that a human then uses to make a lending decision, the AI system itself is still high-risk. The fact that a human reviews the output does not remove the system from scope — it simply means you are meeting part of the human oversight requirement.
Under Article 9, your credit scoring AI must have a documented risk management system that runs throughout the system's entire lifecycle. This includes identifying risks to fundamental rights (particularly non-discrimination), implementing mitigation measures, and testing the system against those risks on an ongoing basis.
The GDPR Overlap
Credit scoring already faces regulation under GDPR Article 22, which gives individuals the right not to be subject to purely automated decisions with legal or similarly significant effects. The AI Act layers on top of this. You need both the GDPR safeguards (human review on request, right to explanation) and the AI Act requirements (proactive risk management, technical documentation, conformity assessment).
The practical difference: GDPR focuses on the data subject's rights after a decision is made. The AI Act focuses on the system design and operation before and during decision-making. You need to satisfy both directions simultaneously.
Practical Steps for Credit Scoring Compliance
Start with a thorough risk assessment of every credit scoring model in your portfolio. For each model, document:
- The training data used and how bias was tested for across protected characteristics (gender, ethnicity, age, disability, nationality)
- The model architecture and decision logic (even if it is a black-box model, you need to document the inputs, outputs, and known behavioral patterns)
- Performance metrics broken down by demographic groups, not just overall accuracy
- The human oversight mechanism — who reviews the model's outputs, how often, and what authority they have to override
- Fallback procedures when the AI system is unavailable or produces anomalous results
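The "performance metrics broken down by demographic groups" item above can be sketched as a simple breakdown routine. Everything below — the group labels, the records, and the field names — is an illustrative assumption, not anything prescribed by the Act:

```python
# Sketch: per-group performance breakdown for a credit scoring model.
# Data, group names, and field names are illustrative placeholders.

from collections import defaultdict

def accuracy_by_group(records):
    """records: list of dicts with keys 'group', 'prediction', 'actual'.
    Returns {group: accuracy} so disparities across protected
    characteristics are visible, not hidden inside overall accuracy."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        total[r["group"]] += 1
        if r["prediction"] == r["actual"]:
            correct[r["group"]] += 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical evaluation records (group labels are placeholders).
records = [
    {"group": "A", "prediction": 1, "actual": 1},
    {"group": "A", "prediction": 0, "actual": 0},
    {"group": "B", "prediction": 1, "actual": 0},
    {"group": "B", "prediction": 1, "actual": 1},
]
print(accuracy_by_group(records))  # A: 1.0, B: 0.5
```

A 100% overall figure can hide a group-level gap; reporting the per-group numbers alongside the aggregate is what makes the disparity documentable.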
If you have not already classified your credit scoring AI, use the classification tool to confirm its risk level and identify the specific obligations.
Fraud Detection and Transaction Monitoring
Navigating the Risk Classification
Fraud detection sits in a nuanced position under the AI Act. Not all fraud detection AI is automatically high-risk. The classification depends on what the system does and who it affects.
An AI system that flags suspicious transactions for review by a human analyst, where the human makes the final decision on whether to block or report the transaction, may fall under limited risk rather than high risk — provided the system does not make consequential decisions autonomously. However, systems that automatically block transactions, freeze accounts, or trigger alerts to authorities cross into high-risk territory because they directly affect the person's access to financial services.
The critical distinction is whether the AI system's output leads to decisions that significantly affect individuals. An internal analytics dashboard that helps analysts prioritize their workload is different from a system that automatically declines transactions in real time.
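The distinction above can be turned into a rough first-pass triage helper. The tier names and rules below are assumptions for internal screening only; actual classification decisions need legal review:

```python
# Sketch: provisional triage of a fraud-detection system based on the
# distinction described above. Tier labels and rules are assumptions,
# not the Act's own classification procedure.

def provisional_risk_tier(auto_blocks: bool, auto_reports: bool,
                          human_final_decision: bool) -> str:
    """Autonomous consequential action pushes toward high-risk;
    purely advisory output reviewed by a human may stay below it."""
    if auto_blocks or auto_reports:
        return "high-risk (directly affects access to financial services)"
    if human_final_decision:
        return "review: possibly limited risk (advisory output only)"
    return "review: unclear scope - escalate to legal"

print(provisional_risk_tier(auto_blocks=True, auto_reports=False,
                            human_final_decision=False))
```

The point of encoding the questions is consistency: every fraud system in the inventory gets asked the same things, and the answers become part of the classification record.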
PSD2 Interaction
Payment service providers subject to PSD2 already have strong customer authentication (SCA) requirements and must ensure that their security measures do not discriminate. The AI Act adds a layer: if your fraud detection AI disproportionately flags transactions from certain demographic groups, you face both a PSD2 concern (access to payment services) and an AI Act concern (discrimination in a high-risk system).
When building your compliance documentation, map your fraud detection systems against both PSD2's security requirements and the AI Act's non-discrimination obligations. The documentation you produce for one regulation can serve the other, but only if you design it that way from the start.
Anti-Money Laundering and Know Your Customer
The Compliance Triangle: AI Act + AMLD + GDPR
AML/KYC is one of the most complex areas for AI compliance because three major regulatory frameworks converge. The Anti-Money Laundering Directives (AMLD 5 and AMLD 6, the latter adopted in 2024 with national transposition still under way) require financial institutions to use risk-based approaches to detect money laundering. GDPR governs the processing of the personal data involved. And the AI Act now regulates the AI systems used to perform these functions.
AI systems that assess individuals' risk profiles for money laundering purposes could be classified as high-risk under Annex III if they produce outputs that significantly affect access to financial services. An AI system that generates a risk score used to determine whether to onboard a customer, maintain an account, or file a suspicious activity report is making assessments that directly impact individuals.
What This Means in Practice
Your AML/KYC AI needs:
- Data quality documentation (Article 10): The training data must be relevant, representative, and as free from errors as possible. For AML systems, this means documenting the sources of training data, how false positives and false negatives were balanced, and how the system performs across different customer segments.
- Transparency obligations (Article 13): Deployers must be able to understand the AI system's output and use it appropriately. Your compliance officers need documentation that explains what the AI's risk scores mean, their confidence levels, and their known limitations.
- Record-keeping (Article 12): The system must automatically log its operations to a sufficient degree to allow for traceability. For AML, this means logging every risk assessment, the inputs used, and the output produced — which aligns with existing regulatory expectations for audit trails.
- Human oversight (Article 14): A human must be able to understand, intervene in, and override the AI system's outputs. For AML compliance, this means your compliance team must have the tools, training, and authority to challenge the AI's risk assessments.
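The Article 12 record-keeping point above might be sketched as an append-only JSON-lines record per assessment. The field names and format are assumptions, not mandated by the Act:

```python
# Sketch: one append-only log record per AML risk assessment, supporting
# the Article 12 traceability expectation. Field names and the JSON-lines
# format are assumptions, not a regulatory template.

import io
import json
from datetime import datetime, timezone

def log_assessment(logfile, customer_ref, inputs, risk_score, model_version):
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "customer_ref": customer_ref,    # pseudonymised customer reference
        "inputs": inputs,                # features used in the assessment
        "risk_score": risk_score,        # the output produced
        "model_version": model_version,  # which model generated it
    }
    logfile.write(json.dumps(record) + "\n")
    return record

# Usage with any file-like object (hypothetical values throughout):
buf = io.StringIO()
rec = log_assessment(buf, "cust-001",
                     {"country": "XX", "pep": False}, 0.12, "aml-v3.2")
```

Logging the model version alongside inputs and output is what lets an auditor reconstruct why a given score was produced after the model has since been retrained.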
Run a gap analysis to identify where your existing AML documentation falls short of the AI Act's requirements. Many financial institutions already have robust AML programs — the work is often about extending existing documentation rather than starting from scratch.
Insurance Pricing and Underwriting
Why Insurance AI Is High-Risk
AI systems used to set insurance premiums, assess claims, or determine coverage eligibility are high-risk under Annex III, Category 5(c). The regulation explicitly covers AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance.
This captures a wide range of insurance AI applications:
- Premium calculation models that use customer data to set prices
- Risk scoring systems that determine whether to offer coverage
- Claims assessment AI that evaluates whether a claim should be paid
- Fraud detection in claims processing (which overlaps with the fraud detection category above)
Solvency II Alignment
Insurers subject to Solvency II already have model validation requirements for their pricing and reserving models. The AI Act's requirements overlap significantly with Solvency II's Pillar 2 governance requirements, particularly around model risk management, internal controls, and documentation.
The key difference is scope. Solvency II focuses on the financial soundness of the insurer. The AI Act focuses on the impact on individuals. An insurance pricing model might be perfectly sound from a Solvency II perspective (accurate loss predictions, adequate reserves) while still violating the AI Act if it discriminates against protected groups or lacks adequate transparency.
Bias in Insurance Pricing
The AI Act's non-discrimination requirements create particular challenges for insurance. Insurance has always been about differentiation — charging different prices based on risk factors. The challenge is ensuring that AI-driven pricing does not use protected characteristics as proxies.
For example, an AI model that uses postcode as a pricing factor might indirectly discriminate based on ethnicity if certain postcodes correlate strongly with ethnic demographics. Under the AI Act, you must test for and document these proxy discrimination risks, and implement measures to mitigate them.
Document your approach to fairness testing in your risk management system. Include the protected characteristics you test for, the metrics you use (demographic parity, equalized odds, calibration across groups), and the thresholds you consider acceptable.
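A fairness check of the kind described can be sketched with a demographic parity ratio. The 0.8 "four-fifths" threshold in the comment is a common heuristic borrowed from discrimination-testing practice, not a threshold set by the AI Act; the acceptance data is hypothetical:

```python
# Sketch: demographic parity check for an insurance acceptance/pricing
# model. The 0.8 "four-fifths" threshold is a widely used heuristic,
# not an AI Act requirement - choose and document your own threshold.

def selection_rates(outcomes):
    """outcomes: list of (group, accepted: bool). Returns rate per group."""
    totals, accepted = {}, {}
    for group, ok in outcomes:
        totals[group] = totals.get(group, 0) + 1
        accepted[group] = accepted.get(group, 0) + int(ok)
    return {g: accepted[g] / totals[g] for g in totals}

def parity_ratio(outcomes):
    """Ratio of the lowest group selection rate to the highest."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Hypothetical outcomes: group A accepted 2/3, group B accepted 1/3.
outcomes = [("A", True), ("A", True), ("A", False),
            ("B", True), ("B", False), ("B", False)]
print(parity_ratio(outcomes))  # 0.5, below the 0.8 heuristic
```

Demographic parity is only one of the metrics named above; equalized odds and calibration across groups need the ground-truth labels as well, and the three can legitimately disagree, which is why the chosen metric and threshold must be documented.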
Algorithmic Trading and Investment AI
MiFID II and the AI Act
Algorithmic trading has been regulated under MiFID II since 2018. Investment firms using algorithmic trading must have effective systems and risk controls, maintain records of all orders and trades, and ensure their algorithms do not create disorderly market conditions.
The AI Act adds requirements that go beyond MiFID II's market integrity focus. If an AI system is used to make investment decisions that affect individual clients — such as robo-advisory platforms, automated portfolio management, or AI-driven investment recommendations — the system may be classified as high-risk because it affects individuals' access to essential financial services.
However, AI systems used purely for proprietary trading (where the firm trades on its own account without directly affecting individual clients) may not fall under the high-risk classification, as the impact is on the firm itself rather than on natural persons.
Practical Compliance Approach
For investment firms, the compliance approach should be:
- Inventory all AI systems used in trading and investment processes. Use the systems inventory tool to catalogue each system, its purpose, and its interaction with client-facing decisions.
- Classify each system based on whether it directly or indirectly affects individual clients. Systems that generate recommendations used in client interactions are more likely to be high-risk than systems used for internal analytics.
- Map existing MiFID II documentation to AI Act requirements. Your MiFID II algorithm documentation already covers system design, testing, risk controls, and record-keeping. Identify the gaps — typically around bias testing, transparency to end users, and fundamental rights impact assessment.
- Implement incremental controls rather than building from scratch. The AI Act requirements for risk management, testing, and documentation can be layered onto your existing MiFID II compliance framework.
DORA: The Digital Operational Resilience Overlay
How DORA Interacts with the AI Act
The Digital Operational Resilience Act (DORA) took effect in January 2025 and applies to virtually all financial entities in the EU. DORA focuses on ICT risk management, incident reporting, resilience testing, and third-party risk management. It does not specifically regulate AI, but it creates a framework that directly affects how financial institutions must manage AI systems.
The interaction works in several ways:
ICT Risk Management (DORA Chapter II): Financial entities must identify, classify, and manage ICT risks. AI systems are ICT systems, so they fall within DORA's risk management framework. Your AI Act risk management system (Article 9) and your DORA ICT risk management framework must be consistent.
Third-Party Risk (DORA Chapter V): If you use third-party AI services (cloud-based credit scoring, outsourced fraud detection, vendor-provided AML tools), DORA requires you to manage those third-party risks. The AI Act also imposes obligations on deployers who use AI systems provided by others. Your vendor management processes must satisfy both sets of requirements.
Incident Reporting (DORA Chapter III): If an AI system malfunctions in a way that affects your operations, DORA requires incident reporting. The AI Act requires reporting serious incidents involving high-risk AI systems (Article 73). You need a single incident management process that satisfies both obligations.
Resilience Testing (DORA Chapter IV): DORA requires regular testing of ICT systems, including advanced testing for systemically important institutions. The AI Act requires ongoing monitoring and testing of high-risk AI systems. Coordinate these testing programs to avoid duplication.
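A single incident record serving both regimes might look like the following sketch. The field names and the reporting heuristic are assumptions, not templates from either regulation:

```python
# Sketch: one incident record carrying the fields both regimes care
# about, so a single process can feed DORA incident reporting and
# AI Act serious-incident reporting. All fields are assumptions.

from dataclasses import dataclass

@dataclass
class AIIncident:
    system_name: str
    occurred_at: str          # ISO timestamp
    description: str
    operational_impact: str   # DORA angle: effect on ICT operations
    affected_persons: int     # AI Act angle: impact on individuals
    high_risk_system: bool    # triggers AI Act serious-incident analysis

    def needs_ai_act_report(self) -> bool:
        # Hedged heuristic only: high-risk system plus harm to persons.
        return self.high_risk_system and self.affected_persons > 0

# Hypothetical incident:
incident = AIIncident("fraud-scoring-v2", "2026-09-01T10:00:00Z",
                      "model outage caused false declines",
                      "payment authorisation degraded for 40 min",
                      affected_persons=120, high_risk_system=True)
print(incident.needs_ai_act_report())  # True
```

Capturing both the operational-impact and affected-persons dimensions in one record is what lets the same intake process route an incident to the correct regulator(s).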
Building a Unified Compliance Framework
The most efficient approach for financial institutions is to build a single compliance framework that satisfies all applicable regulations simultaneously. This avoids duplicating effort and reduces the risk of inconsistencies between separate compliance programs.
Start with an inventory of all AI systems across the organization. For each system, map the applicable regulations:
| AI Application | AI Act | DORA | MiFID II | PSD2 | AMLD | Solvency II | GDPR |
| ----------------- | ------------ | -------- | -------- | ---- | -------- | ----------- | ------------------- |
| Credit scoring | High-risk | ICT risk | — | — | — | — | Art. 22 |
| Fraud detection | Varies | ICT risk | — | SCA | — | — | Legitimate interest |
| AML/KYC | High-risk | ICT risk | — | — | AMLD 5/6 | — | Legal obligation |
| Insurance pricing | High-risk | ICT risk | — | — | — | Pillar 2 | Art. 22 |
| Algo trading | Varies | ICT risk | Art. 17 | — | — | — | — |
| Robo-advisory | High-risk | ICT risk | Art. 25 | — | — | — | Art. 22 |
| Chatbot/support | Limited risk | ICT risk | — | — | — | — | Various |
This mapping exercise immediately reveals where compliance efforts can be consolidated. Risk management documentation, testing frameworks, and incident reporting processes can serve multiple regulations if designed with all of them in mind.
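The mapping table above can also be held as data, so gaps and consolidation opportunities become queryable. The entries below mirror a subset of the table and are an illustration, not an authoritative legal mapping:

```python
# Sketch: the regulatory mapping as queryable data. Entries mirror the
# table above (subset shown); not an authoritative legal mapping.

REG_MAP = {
    "credit_scoring":    {"AI Act": "high-risk", "DORA": "ICT risk",
                          "GDPR": "Art. 22"},
    "fraud_detection":   {"AI Act": "varies", "DORA": "ICT risk",
                          "PSD2": "SCA"},
    "aml_kyc":           {"AI Act": "high-risk", "DORA": "ICT risk",
                          "AMLD": "AMLD 5/6"},
    "insurance_pricing": {"AI Act": "high-risk", "DORA": "ICT risk",
                          "Solvency II": "Pillar 2", "GDPR": "Art. 22"},
}

def systems_under(regulation):
    """All AI applications touched by a given regulation."""
    return sorted(app for app, regs in REG_MAP.items() if regulation in regs)

print(systems_under("GDPR"))  # credit_scoring, insurance_pricing
```

A query like `systems_under("DORA")` immediately shows which systems can share one ICT risk management framework, which is the consolidation point the paragraph above describes.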
Documentation Requirements for Financial AI
What Regulators Will Expect
Financial regulators are experienced at reviewing documentation. When the AI Act comes into full effect, they will expect the same rigor applied to AI systems that they currently expect for financial risk management.
For each high-risk AI system, you must produce and maintain:
Technical Documentation (Article 11 and Annex IV):
- General description of the AI system, including its intended purpose
- Detailed description of the development process, including design choices and model architecture
- Information about training, validation, and testing data
- Performance metrics across relevant subgroups
- Description of the risk management measures implemented
Instructions for Use (Article 13):
- Clear explanation of what the system does and its limitations
- The level of accuracy, robustness, and cybersecurity the system was designed for
- Known circumstances that might affect performance
- Human oversight measures and how to exercise them
Quality Management System (Article 17):
- Processes for regulatory compliance
- Data management procedures
- Design, development, and testing procedures
- Post-market monitoring processes
The compliance documentation tools can help you structure this documentation in a way that satisfies both the AI Act and sector-specific regulations.
Timeline and Priorities for Financial Institutions
What to Do Now
With the August 2026 deadline approaching, financial institutions should prioritize their compliance efforts:
Immediate (Q1 2026):
- Complete an AI systems inventory across all business lines
- Classify every AI system using the classification tool
- Identify high-risk systems that need full compliance programs
- Map regulatory overlaps for each system
Short-term (Q2 2026):
- Begin risk management documentation for high-risk systems, starting with those that most directly affect individuals (credit scoring, insurance pricing)
- Conduct bias testing on all high-risk AI systems
- Establish human oversight mechanisms with clear escalation procedures
- Review third-party AI vendor contracts for compliance obligations
Pre-deadline (Q3 2026):
- Complete technical documentation for all high-risk systems
- Run conformity assessments
- Register high-risk systems in the EU database
- Train all relevant staff on AI Act obligations
Ongoing:
- Post-market monitoring of all high-risk systems
- Regular bias audits and performance reviews
- Incident reporting procedures aligned with both AI Act and DORA
- Periodic review and update of risk management documentation
The Cost of Non-Compliance
For financial institutions, non-compliance carries risks beyond the AI Act's fines (up to EUR 35 million or 7% of global turnover for the most serious violations). Financial regulators may view AI Act non-compliance as evidence of inadequate governance, which could trigger supervisory actions under existing financial regulation. Reputational damage in financial services — where trust is the core product — can be even more costly than regulatory penalties.
The Advantage of Starting with the Finance Sector
Financial services firms have a significant advantage in AI Act compliance: they already have mature compliance cultures, experienced risk management teams, and established documentation practices. The AI Act's requirements are extensive, but they are not unfamiliar in structure. Risk management, testing, documentation, human oversight, and audit trails are concepts every financial institution already understands.
The challenge is adaptation, not invention. Map the AI Act's requirements onto your existing frameworks, identify the gaps, fill them systematically, and maintain the documentation as your AI systems evolve.
Use the financial services compliance tools to get a tailored view of your obligations, or start with a free assessment to understand where your organization stands today. For a detailed look at how risk assessment works under the AI Act, read our AI risk assessment guide.
The compliance cost calculator can help you estimate the investment required and plan your budget accordingly. The institutions that start now will find the August 2026 deadline manageable. Those that wait will find it anything but.