EU AI Act Compliance for Healthcare: What Providers Need to Know
How the EU AI Act affects healthcare organizations. Covers diagnostic AI, triage systems, medical imaging, drug discovery, insurance underwriting, and dual compliance with MDR and IVDR.
Healthcare Is Ground Zero for High-Risk AI
If there is one sector where the EU AI Act hits hardest, it is healthcare. The regulation's risk-based framework treats AI systems that affect human health and safety with the highest level of scrutiny — and for good reason. An AI system that misdiagnoses a patient, miscategorizes a medical image, or assigns an incorrect triage priority can cause direct, serious harm.
For healthcare organizations — hospitals, clinics, private practices, health insurers, pharmaceutical companies, medical device manufacturers, and health tech startups — the EU AI Act creates a dense web of obligations. Many of these overlap with existing medical device regulations, creating dual compliance requirements that demand careful navigation.
This guide covers what healthcare organizations need to know: which AI systems are affected, what the legal requirements are, how the AI Act interacts with the Medical Device Regulation (MDR) and the In Vitro Diagnostic Regulation (IVDR), and what practical steps you should take before August 2026.
Which Healthcare AI Systems Are Affected?
Healthcare AI systems can be captured by the EU AI Act through two distinct pathways, and many are caught by both.
Pathway 1: AI as a Medical Device (Article 6(1) and Annex I)
Under Article 6(1), an AI system is automatically classified as high-risk if it is a safety component of a product covered by EU harmonized legislation listed in Annex I — and that product requires a third-party conformity assessment.
Annex I includes both the Medical Device Regulation (EU 2017/745) and the In Vitro Diagnostic Medical Device Regulation (EU 2017/746). This means any AI system that qualifies as a medical device (or an IVD) under these regulations, and that requires a conformity assessment by a notified body (rather than self-assessment), is automatically high-risk under the AI Act.
In practice, this captures a wide range of healthcare AI:
- AI diagnostic tools that analyze patient data to suggest diagnoses
- AI medical imaging systems that interpret X-rays, CT scans, MRIs, or pathology slides
- AI-powered clinical decision support that recommends treatments or flags risks
- AI monitoring systems that analyze vital signs and predict deterioration
- AI laboratory systems that analyze blood tests, genetic data, or other biological samples
If the AI component is integral to the device's safety function, it is high-risk under the AI Act through this pathway.
Pathway 2: Standalone High-Risk Under Annex III
Even if a healthcare AI system does not qualify as a medical device, it may still be high-risk under Annex III of the AI Act. Two categories are particularly relevant:
Category 5: Access to and Enjoyment of Essential Private Services and Essential Public Services and Benefits. This category captures AI systems used for:
- Risk assessment and pricing in life and health insurance
- Evaluating eligibility for public health benefits or services
- Prioritizing emergency dispatching services (including medical emergencies)
Category 1: Biometrics. If a healthcare system uses biometric categorization (for example, AI that categorizes patients based on physical characteristics for triage or treatment pathway decisions), it may be captured here.
What About Minimal-Risk Healthcare AI?
Not all AI used in healthcare is high-risk. AI tools used for purely administrative or operational purposes — appointment scheduling optimization, hospital capacity planning, supply chain forecasting for medical supplies, or AI-powered transcription of clinical notes — are likely minimal risk, provided they do not influence clinical decisions that affect patient health.
However, the boundary is not always clear. An AI transcription tool that also generates clinical summaries could influence care decisions if clinicians rely on those summaries. Context and usage matter as much as the tool's stated purpose. When in doubt, our risk classification tool can help you determine where your systems fall.
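The two pathways above, plus the minimal-risk carve-out, can be sketched as a simple decision routine. This is an illustrative toy, not legal logic: every field name and rule below is an assumption, and real classification needs case-by-case legal analysis.

```python
from dataclasses import dataclass

# Hypothetical, simplified sketch of the EU AI Act classification
# pathways discussed above. Field names and logic are assumptions.

@dataclass
class AISystem:
    name: str
    is_medical_device: bool          # qualifies as a device under MDR/IVDR
    needs_notified_body: bool        # third-party conformity assessment required
    annex_iii_categories: list[str]  # e.g. ["triage", "insurance_pricing"]
    influences_clinical_decisions: bool

def classify(system: AISystem) -> str:
    # Pathway 1: safety component of an MDR/IVDR device that needs a
    # notified-body conformity assessment (Article 6(1) plus Annex I).
    if system.is_medical_device and system.needs_notified_body:
        return "high-risk (Annex I pathway)"
    # Pathway 2: standalone high-risk use case listed in Annex III.
    if system.annex_iii_categories:
        return "high-risk (Annex III pathway)"
    # Administrative tools that do not touch clinical decisions are
    # likely minimal risk; borderline tools need a closer look.
    if system.influences_clinical_decisions:
        return "needs case-by-case assessment"
    return "likely minimal risk"

print(classify(AISystem("symptom triage bot", False, False, ["triage"], True)))
# -> high-risk (Annex III pathway)
```

Note that a system can satisfy both pathways at once; the sketch simply reports the first match, whereas a real analysis would record both.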
Diagnostic AI Under the EU AI Act
Diagnostic AI — systems that analyze patient data to identify diseases, conditions, or health risks — is the most high-profile category of healthcare AI and faces the most intensive regulatory scrutiny.
What the AI Act Requires
For providers (developers) of high-risk diagnostic AI, the EU AI Act requires:
Risk management (Article 9). A continuous, iterative risk management system throughout the AI system's lifecycle. For diagnostic AI, this means systematically identifying risks including diagnostic errors (false positives and false negatives), performance degradation over time, bias across patient demographics, and failure modes when encountering edge cases or unusual presentations.
Data governance (Article 10). Training, validation, and testing datasets must be relevant, sufficiently representative, and as free of errors as possible. For diagnostic AI, this has specific implications: training data must adequately represent the patient populations the system will serve, including diverse demographics, comorbidities, and clinical presentations. Historical biases in medical data — such as underrepresentation of certain ethnic groups in clinical datasets — must be identified and mitigated.
Technical documentation (Article 11 and Annex IV). Comprehensive documentation covering the system's design, development methodology, performance metrics, limitations, and intended purpose. For diagnostic AI, documentation must include detailed performance data broken down by relevant patient subgroups, not just aggregate accuracy figures.
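The subgroup reporting this documentation requirement calls for can be illustrated with a small metric helper: performance is computed per patient group rather than in aggregate. The group labels, record shape, and metric choices below are assumptions for the example, not prescribed by the Act.

```python
from collections import defaultdict

# Illustrative sketch: sensitivity and specificity per patient subgroup
# rather than a single aggregate accuracy figure. Record format is an
# assumption: (group, true_label, predicted_label) with binary labels.

def subgroup_metrics(records):
    counts = defaultdict(lambda: {"tp": 0, "fn": 0, "tn": 0, "fp": 0})
    for group, y_true, y_pred in records:
        c = counts[group]
        if y_true == 1:
            c["tp" if y_pred == 1 else "fn"] += 1
        else:
            c["tn" if y_pred == 0 else "fp"] += 1
    out = {}
    for group, c in counts.items():
        pos, neg = c["tp"] + c["fn"], c["tn"] + c["fp"]
        out[group] = {
            "sensitivity": c["tp"] / pos if pos else None,
            "specificity": c["tn"] / neg if neg else None,
            "n": pos + neg,  # subgroup sample size, for context
        }
    return out
```

A gap between subgroups in a table like this is exactly the kind of finding the documentation should surface rather than average away.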
Transparency (Article 13). Clear instructions for healthcare professional users, including the system's intended purpose, capabilities, known limitations, accuracy metrics, and the conditions under which performance may degrade. Healthcare professionals must understand what the AI can and cannot reliably do.
Human oversight (Article 14). The system must be designed so that healthcare professionals can effectively oversee its operation, understand its outputs, and override them when clinically appropriate. For diagnostic AI, this means the system should present its reasoning (or at least its confidence level) rather than just a binary diagnosis, and it must be clear that the AI output is a decision support tool, not a replacement for clinical judgment.
Accuracy, robustness, and cybersecurity (Article 15). The system must achieve appropriate levels of accuracy, be resilient to errors and inconsistencies in input data, and be protected against adversarial attacks or manipulation. For diagnostic AI, this includes robustness to variations in imaging equipment, patient positioning, data quality, and other real-world variables.
For Deployers (Healthcare Organizations Using Diagnostic AI)
Healthcare organizations that use diagnostic AI developed by others are deployers under Article 26. Your obligations include:
- Using the system according to the provider's instructions and intended purpose
- Assigning competent healthcare professionals to oversee the system
- Monitoring the system's performance in your specific clinical context
- Retaining system logs for at least six months
- Informing patients when they are subject to AI-assisted diagnostic decisions
- Reporting serious incidents to the provider and relevant authorities
- Conducting a fundamental rights impact assessment before deployment (Article 27 — required for public bodies and private entities providing public services, which covers most healthcare providers)
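The six-month log retention obligation in the list above can be sketched as a minimal retention check. The record shape is an assumption, and real audit logs are far richer and may be bound by longer sector-specific retention rules.

```python
from datetime import datetime, timedelta, timezone

# Sketch of a minimal retention check for the six-month log obligation
# (Article 26). The "timestamp" field is an assumed record format; the
# AI Act sets a floor of six months, not a ceiling.

RETENTION = timedelta(days=183)  # at least six months

def purgeable(log_entries, now=None):
    """Return entries old enough that the AI Act minimum no longer binds."""
    now = now or datetime.now(timezone.utc)
    return [e for e in log_entries if now - e["timestamp"] > RETENTION]
```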
Medical Imaging AI
AI-powered medical imaging — radiology, pathology, dermatology, ophthalmology — is one of the most advanced and widely deployed categories of healthcare AI. Systems that analyze chest X-rays, detect tumors in mammograms, identify diabetic retinopathy in retinal scans, or grade pathology slides are increasingly common in clinical practice.
Specific Considerations
Dual regulatory pathway. Most AI medical imaging systems qualify as medical devices under the MDR, meaning they face high-risk classification through both the Annex I pathway and potentially through Annex III. This does not double the obligations but does mean compliance must satisfy both frameworks simultaneously.
Performance across populations. Medical imaging AI systems trained predominantly on data from one population may perform differently on others. Skin lesion classifiers trained primarily on lighter skin tones have been shown to perform less accurately on darker skin tones. The AI Act's data governance requirements (Article 10) directly address this by requiring representative training data, but healthcare deployers should also independently validate performance on their specific patient population.
Integration with clinical workflows. How imaging AI is integrated into radiological or pathological workflows affects the human oversight requirement. A system that presents AI findings alongside the original images, clearly labeled as AI-generated, supports effective oversight. A system that only presents AI-processed images without easy access to the originals undermines it.
Versioning and updates. AI imaging systems are frequently updated with new model versions. Each significant update may require reassessment under both the AI Act and the MDR, as performance characteristics may change. Healthcare deployers should have processes to evaluate and validate updates before they go into clinical use.
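One way to operationalize this update-validation step is a simple pre-deployment gate that compares a candidate version's metrics against the locally validated baseline. The metric names, baseline figures, and tolerance below are illustrative assumptions, not regulatory thresholds.

```python
# Hypothetical pre-deployment gate for model updates: a new version must
# match or exceed the validated baseline on the deployer's own patient
# population before it replaces the version in clinical use.

BASELINE = {"sensitivity": 0.92, "specificity": 0.88}  # validated locally
TOLERANCE = 0.01  # allow at most a one-point regression on any metric

def approve_update(candidate_metrics: dict) -> bool:
    # A missing metric counts as a failure: every baseline metric must
    # be re-measured and reported for the candidate version.
    return all(
        candidate_metrics.get(metric, 0.0) >= floor - TOLERANCE
        for metric, floor in BASELINE.items()
    )
```

A rejected update would then go back to the provider for investigation rather than silently entering clinical use.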
Triage and Emergency Dispatch AI
AI systems used in patient triage or emergency dispatch occupy a particularly sensitive position under the EU AI Act.
Why Triage AI Is High-Risk
Triage AI is captured by Annex III, Category 5, which covers AI systems intended to evaluate and classify emergency calls, to dispatch or establish priority in the dispatching of emergency first response services (including medical aid), and emergency healthcare patient triage systems. This means even relatively simple AI triage tools — such as symptom checkers that assign urgency levels or call routing systems that prioritize emergency dispatches — are high-risk.
The stakes are clear: an AI system that incorrectly downgrades the priority of a heart attack call, or that fails to identify a time-critical presentation, can cause death. The EU AI Act reflects this by classifying all such systems as high-risk regardless of their complexity.
Practical Implications
- Human oversight is non-negotiable. AI triage systems must support, not replace, human clinical judgment. A triage AI should flag its assessment for a clinician to confirm or override, not automatically determine the care pathway.
- Fail-safe design. The system should be designed so that failures default to a safe state — for example, defaulting to a higher urgency level when uncertain rather than a lower one.
- Performance monitoring in real-time. Unlike AI systems that operate asynchronously, triage AI operates in time-critical environments. Monitoring must include real-time performance tracking and immediate alerting when the system's behavior deviates from expected parameters.
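The fail-safe principle above can be sketched as a small wrapper that escalates rather than downgrades when the model is uncertain. The urgency levels, confidence threshold, and status strings are illustrative assumptions.

```python
# Sketch of fail-safe triage: low confidence or a malformed model output
# escalates to the HIGHEST urgency and requires clinician review, rather
# than silently assigning a lower level. All values are assumptions.

URGENCY = ["routine", "urgent", "emergency"]  # low to high

def triage(model_level: str, confidence: float, threshold: float = 0.85):
    if model_level not in URGENCY or confidence < threshold:
        # Uncertain or invalid output: default to the safe state and
        # require a clinician to confirm before any pathway is chosen.
        return "emergency", "clinician_review_required"
    # Even confident outputs remain decision support, not a decision.
    return model_level, "clinician_confirmation"
```

The key design choice is the direction of the default: a failure mode that over-triages wastes resources, while one that under-triages can cost lives.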
Drug Discovery and Clinical Research AI
AI is increasingly used in pharmaceutical research and drug development — from target identification and molecular modeling to clinical trial design and patient recruitment. The AI Act's treatment of these applications is more nuanced than its treatment of clinical AI.
Risk Classification
Most drug discovery AI operates well before patient contact and does not directly affect individual health outcomes. An AI system that predicts molecular binding affinities or identifies potential drug candidates is generally minimal risk under the AI Act, as it does not fall under any Annex III category and is not a safety component of a medical device.
However, AI used in clinical trials may face higher scrutiny:
- AI that selects or stratifies patients for clinical trials may be considered high-risk if it materially influences who receives experimental treatments
- AI that monitors adverse events during trials and determines whether to halt or modify a trial could be a safety component under the MDR framework
- AI that generates evidence used in regulatory submissions (for marketing authorization of a drug or device) may face scrutiny for data integrity and reliability
Practical Guidance
For pharmaceutical and biotech companies, assess each AI application individually rather than assuming all research AI is minimal risk. The intended purpose and downstream impact on patient safety determine classification.
Insurance Underwriting and Health Insurance AI
AI in health insurance — risk assessment, premium pricing, claims processing, and fraud detection — is explicitly captured by the EU AI Act.
What Is Captured
Annex III, Category 5 specifically includes AI systems "intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance." This means:
- AI risk scoring models that assess an individual's health risk for insurance underwriting purposes are high-risk
- AI premium pricing systems that set or adjust premiums based on individual risk profiles are high-risk
- AI claims assessment tools that evaluate whether claims should be paid, denied, or investigated are likely high-risk if they materially influence the outcome
Key Obligations for Health Insurers
Non-discrimination. The AI Act's fairness requirements (Article 10 on data governance, Article 9 on risk management) reinforce existing non-discrimination obligations in insurance law. AI underwriting systems must not discriminate based on protected characteristics — directly or through proxies. This means testing for disparate impact across demographic groups, not just verifying that protected characteristics are not explicit input features.
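The disparate-impact testing described here can be sketched with a simple outcome-rate comparison across groups. The 0.8 ratio threshold is a common fairness heuristic (the "four-fifths rule"), not a figure from the AI Act, and the group names and data format are invented for the example.

```python
# Illustrative disparate-impact check: compare favorable-outcome rates
# across demographic groups even when the protected attribute is not a
# model input, since proxies can reproduce it. Threshold is a heuristic.

def impact_ratios(outcomes):
    """outcomes: dict group -> list of 0/1 results (1 = favorable)."""
    rates = {g: sum(v) / len(v) for g, v in outcomes.items() if v}
    best = max(rates.values())
    # Ratio of each group's favorable rate to the best-treated group's.
    return {g: r / best for g, r in rates.items()}

def flagged_groups(outcomes, threshold=0.8):
    return [g for g, r in impact_ratios(outcomes).items() if r < threshold]
```

A flagged group is a signal for investigation, not automatic proof of unlawful discrimination; the finding feeds the Article 9 risk management process.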
Transparency to applicants. Under Article 26(11), deployers of high-risk AI systems must inform individuals that they are subject to AI-assisted decision-making. Insurance applicants must know that AI is used in underwriting and pricing decisions, and under Article 86 they have the right to an explanation of the role the AI played in the decision.
Fundamental rights impact assessment. Under Article 27, deployers of high-risk AI systems in insurance must conduct a fundamental rights impact assessment before deployment, evaluating the system's potential impact on equality, non-discrimination, and other fundamental rights.
Interaction with existing insurance regulation. The AI Act's requirements add to, not replace, existing insurance regulatory frameworks like Solvency II and the Insurance Distribution Directive. Health insurers using AI must satisfy both the AI Act and sector-specific requirements.
Dual Compliance: The AI Act and the Medical Device Regulation
For AI systems that qualify as medical devices, the interaction between the EU AI Act and the MDR (or IVDR for diagnostic devices) is the hardest compliance challenge in healthcare AI.
How the Two Frameworks Interact
Article 6(1) of the AI Act establishes that AI systems which are safety components of products requiring third-party conformity assessment under Annex I legislation (which includes the MDR and IVDR) are automatically high-risk. Articles 8(2) and 43(3) then provide a critical integration mechanism: providers may fold the AI Act's testing and documentation into their existing product processes, and compliance with the high-risk requirements is assessed as part of the conformity assessment already required under the relevant product legislation.
In practical terms, this means:
- One conformity assessment process, not two. You do not undergo separate conformity assessments for the MDR and the AI Act. The AI Act requirements are integrated into the MDR conformity assessment.
- One notified body. The notified body that assesses your medical device under the MDR also assesses compliance with the AI Act's high-risk requirements.
- Harmonized standards apply. Where harmonized standards exist that cover both the MDR technical requirements and the AI Act requirements, demonstrating compliance with those standards creates a presumption of conformity.
Where the Frameworks Diverge
Despite this integration, there are areas where the AI Act adds requirements beyond the MDR:
Fundamental rights impact assessment (Article 27). The MDR does not require this. Healthcare deployers of high-risk AI medical devices must assess the impact on fundamental rights — including non-discrimination and data protection — before deployment.
AI literacy (Article 4). The MDR has no equivalent. All healthcare personnel who interact with AI medical devices need AI literacy training.
Transparency and human oversight emphasis. While the MDR requires instructions for use and user information, the AI Act's transparency (Article 13) and human oversight (Article 14) requirements are more prescriptive about how AI-specific information must be communicated and how human oversight must be designed.
Post-market monitoring scope. The MDR already requires post-market surveillance for medical devices. The AI Act's post-market monitoring requirements (Article 72) are largely compatible but may require additional monitoring specifically focused on AI-specific risks like data drift, performance degradation, and emerging biases.
The IVDR Dimension
For AI-based in vitro diagnostic devices — such as AI systems that analyze laboratory results, genetic data, or pathology samples — the same dual compliance framework applies through the IVDR (EU 2017/746). The IVDR's own conformity assessment requirements serve as the vehicle for AI Act compliance, following the same integration model as the MDR.
Building a Healthcare AI Compliance Program
Given the complexity of healthcare AI regulation, healthcare organizations need a structured approach. Here is a practical framework.
Step 1: Comprehensive AI Inventory
Catalog every AI system used in your healthcare organization. Be thorough — include not just clinical AI but also operational, administrative, and research AI. For each system, document:
- Its function and intended use
- Whether it qualifies as a medical device (or a component of one)
- What patient data it processes
- Who uses it and in what clinical context
- Whether it influences clinical decisions
Our assessment tool provides a structured framework for building this inventory.
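As a starting point, each inventory entry can be captured in a structured record like the sketch below, mirroring the fields listed above. Every field name is an assumption; adapt the schema to your own governance tooling.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical inventory record for Step 1. Field names are assumptions
# that mirror the checklist above, not a prescribed schema.

@dataclass
class InventoryEntry:
    system_name: str
    vendor: str
    function: str                        # what it does and intended use
    medical_device: bool                 # MDR/IVDR device or component?
    device_class: Optional[str]          # e.g. "IIa"; None if not a device
    patient_data: list = field(default_factory=list)   # data categories
    users: list = field(default_factory=list)          # roles who use it
    clinical_context: str = ""
    influences_clinical_decisions: bool = False
```

Keeping the inventory as structured data rather than a free-text spreadsheet makes the later classification and gap-analysis steps queryable.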
Step 2: Dual Classification
For each AI system, determine:
- Whether it is a medical device under the MDR or IVDR (and its class)
- Whether it is high-risk under the EU AI Act (through Annex I, Annex III, or both)
- What your role is (provider or deployer) for each system
Many healthcare organizations will be deployers of third-party AI medical devices. If you have developed custom AI tools for clinical use, you may also be a provider. Our classification tool can guide you through this analysis.
Step 3: Gap Analysis Against Both Frameworks
For each high-risk AI system, assess your current compliance against both the MDR/IVDR requirements and the additional AI Act requirements. Common gaps include:
- No formal fundamental rights impact assessment (AI Act Article 27)
- Insufficient AI-specific transparency in instructions for use (AI Act Article 13)
- No structured AI literacy training for clinical staff (AI Act Article 4)
- Human oversight procedures that do not meet the AI Act's specific requirements (AI Act Article 14)
- Post-market monitoring that does not cover AI-specific risks like data drift and bias
Step 4: Implement AI Act-Specific Measures
Close the gaps identified in Step 3 by implementing:
- AI literacy training for all healthcare staff who interact with AI systems — clinical and non-clinical. Article 4 has applied since 2 February 2025, so this is already mandatory and overdue.
- Fundamental rights impact assessments for high-risk AI deployments, particularly those affecting patient access to care, insurance, or treatment decisions.
- Enhanced human oversight procedures that specifically address AI system limitations, including when and how clinicians should override AI recommendations.
- AI-specific post-market monitoring covering performance degradation, bias drift, and adverse events attributable to AI decision-support.
- Patient information processes ensuring patients are informed when AI contributes to their care decisions.
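For the AI-specific monitoring item in the list above, a common drift signal is the population stability index (PSI), which compares the live input distribution against the validation baseline. The binning and the 0.2 alert threshold below are conventional, illustrative choices rather than regulatory requirements.

```python
import math

# Minimal population stability index (PSI) sketch for AI-specific
# post-market monitoring: a rising PSI between the validation baseline
# and live inputs signals data drift worth investigating.

def psi(baseline_counts, live_counts, eps=1e-6):
    """Inputs are per-bin counts over the same bins of some feature."""
    b_total, l_total = sum(baseline_counts), sum(live_counts)
    score = 0.0
    for b, l in zip(baseline_counts, live_counts):
        b_pct = max(b / b_total, eps)  # clamp to avoid log(0)
        l_pct = max(l / l_total, eps)
        score += (l_pct - b_pct) * math.log(l_pct / b_pct)
    return score

def drift_alert(baseline_counts, live_counts, threshold=0.2):
    # 0.2 is a widely used rule of thumb for "significant shift".
    return psi(baseline_counts, live_counts) > threshold
```

An alert here does not by itself mean the model is failing; it means the population has moved away from the one the system was validated on, which is exactly what clinical revalidation should then check.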
Step 5: Ongoing Governance
Establish a healthcare AI governance structure that:
- Reviews all AI-related incidents and near-misses
- Validates AI system performance at regular intervals against your specific patient population
- Evaluates and approves AI system updates before clinical deployment
- Monitors the regulatory landscape for new guidance, harmonized standards, and enforcement precedents
- Maintains audit readiness across both the MDR/IVDR and AI Act requirements
For a broader framework on AI governance, see our guide on building an AI governance framework.
Special Considerations for SMB Healthcare Providers
Small and medium-sized healthcare organizations face the same regulatory requirements as large hospitals and health systems but with fewer resources to meet them. Some practical considerations:
Leverage your providers. If you are a deployer of third-party AI medical devices, your provider has the primary compliance burden for the device itself. Engage with them — request their AI Act compliance documentation, ask about bias testing results, and understand their post-market monitoring program. Your deployer obligations are significant but less intensive than provider obligations.
Start with Article 4. AI literacy training is the most immediate, lowest-barrier compliance step. It applies to every healthcare organization and is already overdue. Completing it demonstrates compliance momentum and reduces the risk of penalty for the most universally applicable obligation.
Focus on clinical AI first. If you use AI systems that directly influence patient care (diagnostic AI, triage AI, treatment recommendation AI), prioritize those for compliance work. AI used for administrative or operational purposes has lower risk and can be addressed later.
Document clinical judgment override. When clinicians override AI recommendations, document the override and the clinical reasoning. This creates an audit trail demonstrating that human oversight is not just theoretical but operational.
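The override documentation described above can be captured in a minimal structured record. The field names and helper below are hypothetical; in practice this would integrate with your EHR or incident management system.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical override audit record: captures what the AI recommended,
# what the clinician decided, and the clinical reasoning, so human
# oversight is demonstrably operational. Field names are assumptions.

@dataclass(frozen=True)
class OverrideRecord:
    system_name: str
    ai_recommendation: str
    clinician_decision: str
    clinical_reasoning: str
    clinician_id: str
    timestamp: datetime

def log_override(log: list, **fields) -> OverrideRecord:
    record = OverrideRecord(timestamp=datetime.now(timezone.utc), **fields)
    log.append(record)  # stand-in for a durable, append-only audit store
    return record
```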
Join sector-specific groups. National healthcare AI networks and professional bodies are developing sector-specific guidance for AI Act compliance. These resources can provide practical, healthcare-focused interpretation of the regulation's requirements.
The Path Forward
Healthcare AI regulation under the EU AI Act is complex, but it is not unmanageable. The AI Act adds AI-specific requirements on top of existing medical device and healthcare frameworks — it does not create an entirely new regulatory universe.
Healthcare organizations with strong medical device compliance programs are well-positioned. The existing MDR/IVDR conformity assessment infrastructure handles much of the AI Act compliance for medical device AI. The additional requirements — AI literacy, fundamental rights impact assessments, enhanced transparency and human oversight — are incremental.
For those just beginning, start now. Visit our healthcare compliance page for sector-specific resources, or run our free compliance assessment for an instant baseline. For deeper guidance on high-risk classification, see our complete guide.
Take the first step. Run our free assessment to understand how the EU AI Act applies to your healthcare organization and get a prioritized action plan tailored to your specific AI systems and compliance needs.