High-Risk AI Classification: The Complete Guide for EU Businesses
Understand how the EU AI Act classifies high-risk AI systems. Learn the 8 categories in Annex III, see SMB-relevant examples, and know your obligations.
Why Classification Matters More Than Anything Else
The entire EU AI Act revolves around one question: what risk level does your AI system pose? The answer determines everything — your legal obligations, documentation requirements, oversight mechanisms, and potential penalties.
Get the classification right and you have a clear roadmap. Get it wrong and you are either over-investing in compliance for a minimal-risk tool or, far worse, failing to meet the obligations of a high-risk system, which can trigger fines of up to EUR 15 million or 3% of global turnover.
This guide walks through the EU AI Act's risk classification system with a focus on high-risk AI — the category that creates the most obligations and the most confusion for small and medium-sized businesses.
The Four Risk Tiers
The EU AI Act organizes all AI systems into four tiers. Understanding the full pyramid helps you see where high-risk fits.
Unacceptable Risk (Prohibited)
These AI uses are banned outright under Article 5. They include social scoring, manipulative AI, untargeted biometric scraping, and emotion recognition in workplaces and schools. If you use any of these, no amount of compliance work fixes the problem — you must stop immediately. Penalties reach EUR 35 million or 7% of turnover.
High Risk
These AI systems are legal but heavily regulated. They must meet extensive requirements for documentation, human oversight, accuracy, and risk management. This is the category this guide focuses on, and it is where most SMB compliance work concentrates.
Limited Risk
These AI systems have lighter obligations, primarily around transparency. If your AI system interacts with people (chatbots), generates synthetic content (deepfakes, AI-generated text or images), or performs emotion recognition or biometric categorization, you must disclose that AI is being used. Article 50 governs these requirements.
Minimal Risk
Most AI tools fall here — spam filters, AI-powered search, recommendation engines, grammar checkers. There are no specific legal obligations beyond general product safety laws, though the AI literacy requirement under Article 4 still applies to anyone using them.
What Makes an AI System "High-Risk"?
The EU AI Act defines high-risk AI through two pathways, set out in Article 6.
Pathway 1: AI as a Safety Component (Article 6(1) and Annex I)
An AI system is automatically high-risk if it is:
- A safety component of a product covered by existing EU harmonization legislation listed in Annex I (such as machinery, medical devices, toys, lifts, marine equipment, civil aviation, motor vehicles, or radio equipment), AND
- That product is required to undergo a third-party conformity assessment under the relevant legislation.
For most SMBs, this pathway is less relevant unless you manufacture physical products with embedded AI safety features. However, if you build medical devices with AI diagnostic components or machinery with AI-driven safety systems, this pathway applies to you.
Pathway 2: Standalone High-Risk AI (Article 6(2) and Annex III)
This is the pathway that affects the majority of businesses. An AI system is high-risk if it falls into one of the eight categories listed in Annex III. These categories are defined by the domain in which the AI system operates and the potential impact on fundamental rights.
The Eight High-Risk Categories in Annex III
Here is every category, what it covers, and what it means in practice for small and medium-sized businesses.
Category 1: Biometric Identification and Categorisation of Natural Persons
What it covers: AI systems intended to be used for remote biometric identification (not including verification), biometric categorisation based on sensitive attributes (race, political opinions, trade union membership, religious beliefs, sex life or sexual orientation), and emotion recognition.
SMB examples:
- A security company using facial recognition to identify individuals in a crowd
- A retail business using AI to categorise customers by ethnicity or estimated age group
- An employer using AI to detect employee emotions during meetings
Important note: Many biometric uses are outright prohibited under Article 5. The ones that land in the high-risk category rather than the prohibited category are those with specific legal bases or limited, authorized use cases. If you are considering any biometric AI, seek specialist advice before deploying.
Typical SMB impact: Low. Most small businesses do not deploy biometric identification systems. However, if you use any access control or surveillance technology marketed as having "AI-powered" recognition features, check whether it falls here.
Category 2: Management and Operation of Critical Infrastructure
What it covers: AI systems used as safety components in the management and operation of critical digital infrastructure, road traffic, and the supply of water, gas, heating, or electricity.
SMB examples:
- An energy company using AI to manage electricity grid distribution
- A water utility using AI for supply management and leak detection
- A logistics firm using AI to manage traffic routing for critical supply chains
Typical SMB impact: Low to moderate. Unless your business operates in utilities, energy, or critical infrastructure management, this category is unlikely to apply. However, if you provide AI-powered software to critical infrastructure operators, you may be a provider of a high-risk system.
Category 3: Education and Vocational Training
What it covers: AI systems intended to determine access to or admission to educational and vocational training institutions, to evaluate learning outcomes, to assess the appropriate level of education for an individual, or to monitor and detect prohibited behaviour of students during tests.
SMB examples:
- A training company using AI to assess employee competency levels
- An EdTech startup using AI to grade assignments or recommend courses
- A vocational school using AI to decide student admissions
- A testing platform using AI-powered proctoring during examinations
Typical SMB impact: Moderate for businesses in education or professional training. If your company provides corporate training with AI-driven assessments, or if you use AI to evaluate employee skills and recommend training paths, this category may apply.
Category 4: Employment, Workers Management, and Access to Self-Employment
What it covers: AI systems used in recruitment and selection (screening CVs, ranking candidates), making decisions affecting terms of work relationships (promotion, termination, task allocation), and monitoring or evaluating worker performance and behaviour.
SMB examples:
- Using an AI-powered recruitment platform like HireVue or Pymetrics to screen job applicants
- An HR tool that uses AI to rank candidates based on CV analysis
- AI systems that monitor employee productivity or predict turnover
- Tools that use AI to recommend promotions, bonuses, or performance ratings
- AI-driven scheduling systems that allocate tasks based on predicted performance
Typical SMB impact: High. This is one of the most relevant categories for SMBs. If you use any AI-powered HR or recruitment tool, there is a strong chance it falls into this category. Even using a general-purpose AI model to help screen CVs could be considered high-risk if the AI output materially influences hiring decisions.
Category 5: Access to and Enjoyment of Essential Private Services and Public Services and Benefits
What it covers: AI systems used to evaluate creditworthiness or credit scoring, to assess eligibility for public assistance benefits and services, for risk assessment and pricing in life and health insurance, and to evaluate and classify emergency calls or prioritise dispatching of emergency services.
SMB examples:
- A financial services firm using AI for credit scoring or loan approvals
- An insurance company using AI to assess risk profiles and set premiums
- A fintech startup using AI to evaluate creditworthiness for buy-now-pay-later services
- A benefits administration platform using AI to determine eligibility
Typical SMB impact: High for financial services and insurance businesses. If your business offers credit, lending, insurance, or benefits administration and uses AI in any part of the assessment process, this category very likely applies. Even using a third-party AI-powered credit scoring API makes you a deployer of a high-risk system.
Category 6: Law Enforcement
What it covers: AI systems used for risk assessments of natural persons (predicting criminal behaviour), polygraphs and similar tools, evaluation of evidence reliability, profiling during criminal investigations, and crime analytics.
Typical SMB impact: Very low. This category primarily affects law enforcement agencies and their technology suppliers. Unless your business sells AI-powered tools to police or security agencies, it is unlikely to apply.
Category 7: Migration, Asylum, and Border Control Management
What it covers: AI systems used as polygraphs or similar during immigration processing, for assessing migration risks, for examining visa and residence permit applications, and for identification of persons in the context of migration.
Typical SMB impact: Very low. Like Category 6, this primarily affects government agencies and their specialist technology suppliers.
Category 8: Administration of Justice and Democratic Processes
What it covers: AI systems used to assist judicial authorities in researching and interpreting facts and the law, and AI systems intended to influence the outcome of elections or voting behaviour (excluding tools that do not directly interact with voters, such as campaign logistics).
Typical SMB impact: Very low for most businesses. Relevant mainly for legal technology companies providing AI research tools to courts, or for organizations involved in election technology.
How to Classify Your AI Systems
Follow this structured process to classify every AI system in your organization.
Step 1: Build Your AI Inventory
List every AI system your organization uses or provides. Be thorough. Include:
- Named AI products (ChatGPT, Microsoft Copilot, Salesforce Einstein)
- AI features embedded in existing software (AI-powered search in your CRM, AI recommendations in your analytics tool)
- Custom AI models or scripts your team has built
- AI APIs you integrate into your products or workflows
- AI tools employees may have adopted without formal approval
Step 2: Check Against Prohibited Uses
Before classifying risk, verify that none of your AI uses are outright prohibited under Article 5. If any are, stop using them immediately.
Step 3: Check Annex I (Safety Components)
If any AI system is a safety component of a product covered by the legislation listed in Annex I, and that product requires third-party conformity assessment, the AI system is high-risk under Article 6(1).
Step 4: Check Annex III (Domain-Based Categories)
For each remaining AI system, check whether its intended purpose falls into any of the eight Annex III categories described above. Pay special attention to Categories 4 (Employment) and 5 (Essential Services), as these are the most common high-risk triggers for SMBs.
Step 5: Apply the Exception in Article 6(3)
Even if an AI system falls within an Annex III category, it is NOT considered high-risk if it does not pose a significant risk of harm to the health, safety, or fundamental rights of natural persons. This exception applies when the AI system:
- Performs a narrow procedural task
- Improves the result of a previously completed human activity
- Detects decision-making patterns or deviations from prior patterns, without replacing or influencing a previously completed human assessment
- Performs a preparatory task for an assessment that is relevant to the use cases listed in Annex III
However, an AI system that performs profiling of natural persons is always considered high-risk, regardless of this exception. Use this exception carefully and document your reasoning thoroughly.
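To make your use of the exception auditable, the conditions above can be encoded as an explicit check, with the profiling override applied first. This is a simplified sketch of the decision logic, not a legal determination: the condition labels are our own shorthand for the four conditions listed above, and a real assessment must also document why the system poses no significant risk of harm.

```python
# Shorthand labels (ours) for the four Article 6(3) conditions.
EXCEPTION_CONDITIONS = {
    "narrow_procedural_task",
    "improves_completed_human_activity",
    "detects_patterns_without_influencing_assessment",
    "preparatory_task_only",
}

def article_6_3_exception_applies(performs_profiling: bool,
                                  conditions_met: set[str]) -> bool:
    """Sketch of the Article 6(3) carve-out for Annex III systems.

    Profiling of natural persons always stays high-risk, so that
    override is checked before any of the four conditions.
    """
    if performs_profiling:
        return False
    # Any one of the listed conditions can support the exception,
    # provided the documented reasoning shows no significant risk of harm.
    return bool(conditions_met & EXCEPTION_CONDITIONS)
```

The profiling check coming first mirrors the structure of the Article: no combination of narrow-task conditions can rescue a system that profiles natural persons.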
Step 6: Document Your Classification
For every AI system, record:
- The system name and purpose
- Your classification decision (minimal, limited, high, or prohibited)
- The reasoning behind the classification, including which Annex III category applies (if any)
- Whether the Article 6(3) exception was considered and why it does or does not apply
- The date of classification and who made the decision
This documentation is not optional. It is your primary defence in the event of a regulatory inquiry.
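One simple way to produce this record is a plain dictionary serialised to JSON, so every decision is timestamped, attributable, and exportable on request. The schema below is illustrative; the field names and example values are our own, and you should adapt them to your own review process.

```python
import json
from datetime import date

# Illustrative classification record; field names are our own suggestion.
classification_record = {
    "system": "AI recruitment screener",
    "purpose": "rank inbound job applications",
    "classification": "high",            # minimal | limited | high | prohibited
    "annex_iii_category": "employment",  # None if no category applies
    "reasoning": "Outputs materially influence hiring decisions.",
    "article_6_3_considered": True,
    "article_6_3_applies": False,
    "article_6_3_notes": "System profiles candidates, so the exception is unavailable.",
    "classified_on": date(2025, 3, 1).isoformat(),
    "classified_by": "compliance officer",
}

# Append-only JSON lines make a simple, tamper-evident audit trail.
audit_line = json.dumps(classification_record)
```

Keeping the Article 6(3) reasoning in the record even when the exception does not apply shows a regulator that you considered it rather than skipped it.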
Obligations for High-Risk AI Systems
Once you have identified a high-risk system, here is what the EU AI Act requires. The requirements differ depending on whether you are a provider (developing or supplying the AI) or a deployer (using it in your business).
For Providers
Providers of high-risk AI systems must comply with Articles 8 through 15:
- Risk management system (Article 9) — Continuous, iterative risk identification and mitigation throughout the AI system lifecycle
- Data governance (Article 10) — Training, validation, and testing data must meet quality criteria: relevant, sufficiently representative, and, to the best extent possible, free of errors and complete
- Technical documentation (Article 11) — Comprehensive documentation demonstrating compliance, prepared before the system is placed on the market
- Record-keeping (Article 12) — Automatic logging of events to ensure traceability
- Transparency (Article 13) — Clear instructions for deployers, including system capabilities, limitations, and intended purpose
- Human oversight (Article 14) — Design the system so humans can effectively oversee its operation
- Accuracy, robustness, cybersecurity (Article 15) — Appropriate levels of accuracy, resilience to errors, and protection against attacks
Providers must also conduct conformity assessments (Article 43), register the system in the EU database (Article 49), and establish post-market monitoring systems (Article 72).
For Deployers
Deployers of high-risk AI systems — which includes any SMB using a high-risk AI tool — must comply with Article 26:
- Use the system according to the provider's instructions — Do not repurpose a high-risk system beyond its intended use
- Ensure human oversight — Assign competent individuals to oversee the AI system's operation
- Monitor the system's operation — Watch for anomalies, errors, and unexpected outputs
- Keep logs — Maintain logs generated by the system for at least six months (or as required by sector-specific legislation)
- Inform affected persons — Notify people when they are subject to decisions made by high-risk AI systems
- Conduct a fundamental rights impact assessment — Before deploying certain high-risk systems, assess their impact on fundamental rights (Article 27)
- Cooperate with authorities — Provide information and access when requested by market surveillance authorities
Shared Obligations
Both providers and deployers must ensure AI literacy (Article 4), apply quality management systems where relevant, and report serious incidents to authorities.
Common Classification Mistakes
Assuming Off-the-Shelf Tools Are Not Your Problem
If you deploy a third-party high-risk AI system, you are a deployer with legal obligations. The provider's compliance does not absolve you. You must still ensure human oversight, maintain logs, inform affected persons, and conduct impact assessments.
Overlooking AI Features in Existing Software
Many business tools have added AI features — your CRM might now use AI to score leads, your recruitment platform might use AI to filter applicants, your finance software might use AI for cash flow predictions. These embedded AI features can trigger high-risk classification even if the primary product is not marketed as an "AI tool."
Classifying Based on Marketing, Not Function
Vendors may describe their product as "AI-powered analytics" when in practice it makes recommendations that influence employment decisions. Classification depends on what the system actually does and how you use it, not how the vendor describes it.
Ignoring the Article 6(3) Exception
Some businesses classify every AI system in an Annex III domain as high-risk without considering the exception. If an AI system performs a narrow preparatory task that a human then reviews independently, the exception may apply and save significant compliance effort.
Over-Relying on the Article 6(3) Exception
The opposite mistake. Some businesses stretch the exception to avoid classifying clearly high-risk systems. If a regulator disagrees with your reasoning, you face non-compliance penalties. When in doubt, classify as high-risk — the cost of over-compliance is far less than the cost of a fine.
How AktAI Automates Classification
Manually classifying every AI system against Annex I, Annex III, and the Article 6(3) exception is time-consuming and error-prone. AktAI streamlines the entire process:
- Guided AI inventory — Step-by-step system registration that captures every detail needed for accurate classification. Quick-add chips for popular tools like ChatGPT, Microsoft Copilot, and Salesforce Einstein mean you can build your inventory in minutes.
- Automatic risk classification — AktAI's engine analyses each system's purpose, domain, and usage context against all Annex III categories and produces a preliminary classification with detailed reasoning.
- Article 6(3) exception analysis — The platform evaluates whether the narrow-task exception applies and documents the reasoning either way.
- Obligation mapping — Once classified, AktAI tells you exactly what obligations apply, with checklists tailored to whether you are a provider or a deployer.
- Gap analysis — Instantly see which high-risk requirements you have met and which still need attention, with prioritised action plans.
- Audit-ready documentation — Every classification decision, its reasoning, and supporting evidence are stored and exportable for regulatory review.
The August 2, 2026 deadline for full high-risk enforcement is approaching. Classification is the foundation that every other compliance activity builds on. Without accurate classification, you cannot know what documentation you need, what oversight to implement, or what risks to manage.
Find out how your AI systems classify. Use our free compliance scanner to get an instant risk classification for every AI tool your business uses — no signup required.