The Complete AI Compliance Checklist: From Inventory to Monitoring
A comprehensive 5-phase AI compliance checklist for the EU AI Act. Covers discovery, classification, documentation, training, and monitoring with specific action items for each phase.
Why You Need a Structured Approach
The EU AI Act is a complex regulation. It spans 113 articles, 13 annexes, and creates different obligations depending on whether you are a provider, deployer, importer, or distributor of AI systems — and whether those systems are classified as minimal, limited, high, or unacceptable risk.
Trying to comply without a structured approach is like trying to renovate a house without blueprints. You might get some things right by instinct, but you will miss critical steps, duplicate effort, and end up with something that does not hold together.
This checklist breaks EU AI Act compliance into five sequential phases. Each phase builds on the previous one. Skip a phase and the subsequent ones fall apart. Follow them in order and you end up with comprehensive, audit-ready compliance.
This is not theory — it is a practical action plan with specific tasks you can assign, track, and complete. Use it alongside our interactive compliance checklist to track your progress digitally.
Phase 1: Discovery — Know What AI You Actually Use
You cannot comply with AI regulations if you do not know what AI your organization uses. This sounds obvious, but it is where most businesses fail. Shadow AI — tools adopted by employees without formal approval — means your real AI footprint is almost certainly larger than you think.
Action Items
1.1 Conduct a formal AI inventory.
Survey every department, team, and function in your organization. Ask: what software tools do you use that involve AI, machine learning, or automated decision-making? Cast a wide net — include tools that people might not think of as "AI."
Common AI tools that businesses overlook:
- Email spam filters and priority inbox features
- Grammar and writing assistants (Grammarly, Microsoft Editor)
- Translation tools (DeepL, Google Translate)
- Predictive text and autocomplete features
- Analytics dashboards with "smart insights" or AI-generated recommendations
- CRM systems with lead scoring or churn prediction
- Customer service chatbots and automated response systems
- AI-powered search within internal tools
- Code completion tools (GitHub Copilot, Cursor)
1.2 Document each AI system's purpose and usage context.
For every system you identify, record:
- The system name and provider
- What it is used for in your organization
- Who uses it (which roles or departments)
- What data it processes (especially personal data)
- Whether it influences decisions that affect individuals
- Whether it was formally procured or adopted informally
- The provider's stated intended purpose
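If you keep the register in a structured export rather than a spreadsheet, the fields above map to a simple record type. A minimal Python sketch; the field names and the example tool are illustrative, not mandated by the Act:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in the AI inventory (illustrative field names)."""
    name: str                                       # system name
    provider: str                                   # vendor or internal team
    purpose: str                                    # what it is used for
    users: list[str] = field(default_factory=list)  # roles or departments
    processes_personal_data: bool = False
    influences_individual_decisions: bool = False
    formally_procured: bool = True                  # False = shadow AI
    provider_intended_purpose: str = ""

# Hypothetical example entry
inventory = [
    AISystemRecord(
        name="ResumeRanker",
        provider="Example Vendor",
        purpose="CV screening for recruitment",
        users=["HR"],
        processes_personal_data=True,
        influences_individual_decisions=True,
    )
]
```

Consistent fields like these are what make Phase 2 classification and later exports for regulators mechanical rather than ad hoc.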
1.3 Identify your role for each system.
Under the EU AI Act, your obligations depend on your role:
- Provider — You developed or had the AI system developed and place it on the market or put it into service under your own name or trademark
- Deployer — You use the AI system in your professional capacity (this describes most SMBs for most of their tools)
- Importer — You bring an AI system from outside the EU into the EU market
- Distributor — You make an AI system available on the market without being a provider or importer
Most SMBs are deployers of third-party AI tools. Some also develop AI features or products, making them providers as well. Your role can differ from system to system.
1.4 Check for prohibited uses.
Before going further, verify that none of your AI uses fall under Article 5's prohibited practices. If any do, stop using them immediately. Prohibited uses include social scoring, manipulative AI that exploits vulnerabilities, untargeted facial recognition scraping, and emotion recognition in workplaces and schools (with limited exceptions). Penalties for prohibited practices reach EUR 35 million or 7% of global annual turnover, whichever is higher.
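The penalty cap for prohibited practices is the higher of the two figures, which is easy to get wrong when budgeting risk. A quick arithmetic sketch:

```python
def max_fine_prohibited_practice(annual_turnover_eur: float) -> float:
    """Upper bound of the Article 5 penalty: the higher of EUR 35 million
    or 7% of total worldwide annual turnover."""
    return max(35_000_000, 0.07 * annual_turnover_eur)

# A company with EUR 1 billion turnover: 7% = EUR 70M, above the flat cap.
max_fine_prohibited_practice(1_000_000_000)   # 70_000_000.0
```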
1.5 Capture the inventory in a central register.
Use a structured format — our AI systems registry provides a purpose-built tool for this — or at minimum a spreadsheet with consistent fields. This register is a living document that must be maintained as tools are adopted, changed, or retired.
Your discovery phase is complete when you can answer: "Every AI system in our organization is documented, with its purpose, users, data flows, and our role clearly recorded." Our assessment tool can help validate that your inventory is comprehensive.
Phase 2: Classification — Determine Your Risk Level
With your inventory complete, the next phase is classifying every AI system against the EU AI Act's four risk tiers. Classification determines your legal obligations, so getting it right is critical.
Action Items
2.1 Apply the risk classification framework.
For each AI system in your inventory, work through the classification hierarchy:
- Is it prohibited? (Already checked in Phase 1, but verify again with fresh eyes)
- Is it high-risk under Annex I? (AI as a safety component of a product requiring third-party conformity assessment)
- Is it high-risk under Annex III? (AI used in one of the eight regulated domains: biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, or justice)
- Does the Article 6(3) exception apply? (Even if an AI system falls under Annex III, it may not be high-risk if it performs only a narrow procedural task, improves a previously completed human activity, detects patterns without influencing human assessment, or performs a preparatory task)
- Is it limited risk? (AI systems with transparency obligations under Article 50 — chatbots, deepfake generators, emotion recognition, biometric categorization)
- Is it minimal risk? (Everything else — no specific obligations beyond AI literacy)
For detailed guidance on high-risk classification, see our complete classification guide.
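The hierarchy is order-sensitive: each question only matters if the previous one did not apply. A minimal sketch of that flow, where the boolean flags are illustrative stand-ins for the legal analysis done by a human reviewer:

```python
from types import SimpleNamespace

def classify(system) -> str:
    """Walk the EU AI Act risk tiers in order; first match wins.
    `system` is assumed to carry boolean flags set during review."""
    if system.is_prohibited:                        # Article 5
        return "prohibited"
    if (system.annex_i_safety_component or system.annex_iii_domain) \
            and not system.article_6_3_exception:   # Article 6 + Annexes I/III
        return "high-risk"
    if system.transparency_obligation:              # Article 50
        return "limited"
    return "minimal"                                # Article 4 literacy only

# Hypothetical example: a customer-facing chatbot
chatbot = SimpleNamespace(
    is_prohibited=False,
    annex_i_safety_component=False,
    annex_iii_domain=False,
    article_6_3_exception=False,
    transparency_obligation=True,   # interacts with people
)
classify(chatbot)   # "limited"
```

The point of the sketch is the ordering: an Annex III system that also chats with users is high-risk, not merely limited-risk, because the earlier check wins.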
2.2 Pay special attention to Category 4 and Category 5 in Annex III.
For SMBs, the most commonly triggered high-risk categories are:
- Category 4: Employment — AI used in recruitment, screening, hiring decisions, performance evaluation, promotion, termination, or task allocation
- Category 5: Essential services — AI used in credit scoring, insurance risk assessment, emergency service dispatch, or public benefits eligibility
If you use any AI-powered HR, recruitment, lending, or insurance tools, there is a strong likelihood that one or more of your systems is high-risk.
2.3 Document your classification reasoning.
For every AI system, record:
- The classification decision (prohibited, high-risk, limited, or minimal)
- The legal basis for the decision (which article, annex, and category applies)
- Whether the Article 6(3) exception was considered and the reasoning
- The date of classification and who made the decision
- Any assumptions made and information relied upon
This documentation is your primary defense in a regulatory inquiry. A well-reasoned classification that a regulator ultimately disagrees with demonstrates good faith. No documentation at all suggests negligence.
2.4 Prioritize your compliance effort.
With classification complete, you now know where to focus:
- High-risk systems require the most compliance work (Articles 8-15 for providers, Article 26 for deployers)
- Limited-risk systems need transparency measures (Article 50)
- Minimal-risk systems need only AI literacy coverage (Article 4)
If you have no high-risk systems, your compliance burden is manageable. If you have several, plan accordingly — each one needs its own set of documentation, oversight procedures, and monitoring.
Phase 3: Documentation — Build Your Compliance Evidence
Documentation is the tangible proof of your compliance. Without it, your governance framework, risk assessments, and policies exist only in memory — which is worth nothing in a regulatory inquiry.
Action Items
3.1 Create technical documentation for high-risk systems (providers).
If you are a provider of a high-risk AI system, Article 11 and Annex IV require comprehensive technical documentation including:
- A general description of the system and its intended purpose
- The design specifications and architecture
- Development methodology and techniques
- Data governance and management practices (training, validation, and testing data)
- Performance metrics including accuracy, robustness, and cybersecurity measures
- The risk management system and its outcomes
- Human oversight measures
- Information provided to deployers (instructions for use)
- The conformity assessment procedure followed
This is substantial documentation. Start early and build iteratively rather than trying to create everything at once.
3.2 Maintain deployer documentation.
If you are a deployer of a high-risk AI system, your documentation requirements under Article 26 include:
- Records showing you use the system in accordance with the provider's instructions
- Human oversight assignments and qualifications of assigned individuals
- Logs generated by the system (retained for at least six months, or longer if required by sector-specific law)
- Your fundamental rights impact assessment (required under Article 27 for certain deployers of high-risk systems)
- Records of any serious incidents and how they were reported
- Evidence of monitoring activities and their findings
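The six-month log-retention floor can be enforced mechanically. A minimal sketch, assuming retention at exactly the Article 26(6) minimum, with 183 days used here as an approximation of six months:

```python
from datetime import date, timedelta

def earliest_deletable(log_date: date, retention_days: int = 183) -> date:
    """Logs must be kept at least six months under Article 26(6);
    sector-specific law may require longer."""
    return log_date + timedelta(days=retention_days)

def may_delete(log_date: date, today: date) -> bool:
    return today >= earliest_deletable(log_date)

may_delete(date(2025, 1, 1), date(2025, 6, 1))   # False: under six months
may_delete(date(2025, 1, 1), date(2025, 8, 1))   # True
```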
3.3 Document your transparency measures for limited-risk systems.
For AI systems with transparency obligations under Article 50:
- If you deploy a chatbot or AI that interacts with people: document how you disclose that the person is interacting with AI
- If you generate deepfakes or synthetic content: document how you mark or label that content
- If you use emotion recognition or biometric categorization: document how you inform the subjects
3.4 Maintain training records.
For Article 4 AI literacy compliance, keep records of:
- What training was provided and its content
- Who completed it and when
- How the training was tailored to different roles (proportionality requirement)
- When the training was last updated
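Kept in a structured form, these records also make refresher scheduling straightforward. A minimal sketch, assuming an annual refresher cycle; the field names and example are illustrative:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class TrainingRecord:
    employee: str
    module: str            # e.g. "AI literacy (Article 4)"
    role_track: str        # proportionality: content tailored to the role
    completed_on: date
    content_version: str   # which revision of the material was taken

    def refresher_due(self, cycle_days: int = 365) -> date:
        """Next refresher date, defaulting to an annual cycle."""
        return self.completed_on + timedelta(days=cycle_days)

rec = TrainingRecord("A. Example", "AI literacy (Article 4)",
                     "general staff", date(2025, 3, 1), "v2")
rec.refresher_due()   # date(2026, 3, 1)
```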
3.5 Create and maintain compliance policies.
Document your organizational policies, at minimum:
- AI Acceptable Use Policy
- AI Procurement and Adoption Policy
- AI Incident Response Procedure
- Data Handling Policy for AI Systems
- AI Monitoring and Review Procedure
3.6 Establish a document management system.
All compliance documentation should be:
- Stored in a central, accessible location
- Version-controlled (so you can show the document history)
- Reviewed and updated on a defined schedule
- Backed up securely
- Exportable for regulatory requests
Our documents module provides a purpose-built compliance documentation system that handles versioning, storage, and export automatically. For general best practices, see our guide on compliance documentation.
Phase 4: Training — Build Organizational Competence
Documentation and policies are only effective if the people in your organization understand and follow them. Phase 4 ensures that everyone — from leadership to frontline employees — has the knowledge they need to fulfill their role in your compliance framework.
Action Items
4.1 Deliver Article 4 AI literacy training to all staff.
This is already mandatory (since February 2, 2025). If you have not done it, this is your highest immediate priority. All staff who interact with AI systems in any capacity need training that is proportionate to their role.
The training must cover at minimum:
- What AI is and how it works at a conceptual level
- The EU AI Act and its relevance to your organization
- Your organization's AI policies and acceptable use rules
- How to identify and report AI-related incidents or concerns
- The limitations of AI — that AI can be inaccurate, biased, and manipulated
Our AI literacy training module provides structured, Article 4-compliant training with role-appropriate content and completion tracking.
4.2 Provide enhanced training for human oversight personnel.
Individuals assigned to oversee high-risk AI systems (as required by Article 14) need deeper training:
- How the specific AI system works, including its capabilities and limitations
- How to interpret the system's outputs
- When and how to override or disregard the system's outputs
- The monitoring procedures they are responsible for
- How to escalate issues and report incidents
- The regulatory requirements specific to their oversight role
4.3 Train leadership on governance responsibilities.
Your AI governance lead and leadership team should understand:
- The EU AI Act's risk classification system and how it applies to your organization
- The penalty structure and enforcement mechanisms
- The organization's compliance status and key gaps
- Their decision-making responsibilities regarding AI adoption, risk acceptance, and resource allocation
- How to respond to regulatory inquiries
4.4 Provide role-specific training for AI system users.
Each team or department that uses AI tools should receive training specific to their tools and context:
- What data can and cannot be input into the AI system
- How to critically evaluate AI outputs before acting on them
- What constitutes appropriate and inappropriate use
- How to identify when the AI system is producing unreliable or potentially biased outputs
- The human oversight and monitoring procedures for their specific tools
4.5 Establish ongoing training requirements.
AI literacy is not a one-time event. Define:
- Frequency of refresher training (annually at minimum)
- Triggers for additional training (new AI tool adoption, significant regulatory updates, role changes)
- How new employees receive AI literacy training during onboarding
- How training content is updated to reflect changes in your AI landscape and regulatory obligations
Phase 5: Monitoring — Maintain Compliance Over Time
Compliance is not a destination — it is a continuous state. Phase 5 establishes the monitoring and review processes that keep your compliance framework current and effective as your AI usage, the regulatory landscape, and the technology itself evolve.
Action Items
5.1 Implement operational monitoring for high-risk AI systems.
For each high-risk system, define and execute monitoring procedures:
- Performance monitoring — Track accuracy, error rates, and output quality over time. Establish baselines and alert thresholds.
- Bias monitoring — For systems that affect individuals (hiring, credit, insurance), periodically assess whether outputs show demographic bias.
- Usage monitoring — Verify that the system is being used within its intended purpose and that usage patterns have not drifted.
- Incident tracking — Log every AI-related incident, from minor output errors to significant failures. Analyze trends.
- Log review — Periodically review the automatic logs generated by high-risk AI systems (as required by Article 12).
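The baseline-and-threshold idea in the performance bullet can be sketched as a simple drift check. The tolerance value is illustrative, not a regulatory threshold:

```python
def check_performance(baseline_accuracy: float,
                      current_accuracy: float,
                      tolerance: float = 0.05) -> bool:
    """Return True (raise an alert) if current accuracy has dropped
    more than `tolerance` below the recorded baseline."""
    return (baseline_accuracy - current_accuracy) > tolerance

check_performance(0.92, 0.90)   # False: within tolerance
check_performance(0.92, 0.84)   # True: drift beyond 5 points, investigate
```

The same pattern, with different metrics, covers the bias and usage bullets: record a baseline, measure periodically, and alert on deviation.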
5.2 Monitor your AI inventory for changes.
Your AI landscape is not static. New tools are adopted, existing tools are updated or replaced, and employees may start using AI in new ways. Establish processes to:
- Review the AI inventory at least quarterly
- Require approval before new AI tools are adopted (referencing your AI Procurement Policy)
- Monitor for shadow AI — tools adopted without formal approval
- Reassess classifications when AI tools are significantly updated by their providers
5.3 Track regulatory developments.
The EU AI Act regulatory landscape is still evolving. Harmonized standards are being developed, guidance is being issued, and enforcement precedents are being set. Stay current on:
- New guidance from the European AI Office and your national competent authority
- Harmonized standards published by CEN and CENELEC (the European standardization bodies)
- Enforcement decisions and penalties in your sector or jurisdiction
- Updates to codes of practice for general-purpose AI models
5.4 Conduct periodic compliance reviews.
Schedule formal compliance reviews:
- Monthly — Quick status check: are monitoring procedures being followed, are there any open incidents, have any new AI tools been adopted?
- Quarterly — Deeper review: update risk assessments for high-risk systems, review training records, verify documentation is current
- Annually — Comprehensive review: reassess the entire governance framework, update policies, review classification decisions, conduct a gap analysis
5.5 Maintain audit readiness.
At any time, you should be able to produce:
- Your complete AI systems inventory with classification decisions
- Technical documentation and deployer documentation for all high-risk systems
- Training records showing Article 4 compliance
- Monitoring logs and review records
- Incident reports and resolution records
- Your governance framework documentation (policies, procedures, organizational structure)
If you cannot produce these within a reasonable timeframe when requested, you have a compliance gap. Use our gap analysis tool to identify exactly where your documentation falls short.
Mapping the Checklist to EU AI Act Articles
For reference, here is how each phase maps to specific EU AI Act requirements:
Phase 1: Discovery
- Article 26(1) — Deployers must use AI systems in accordance with their instructions for use, which requires knowing what systems you use
- Article 49 — High-risk AI systems must be registered in the EU database, which requires an inventory
Phase 2: Classification
- Article 6 — Risk classification criteria
- Annex I — Products requiring third-party conformity assessment
- Annex III — Eight high-risk AI categories
Phase 3: Documentation
- Article 11 + Annex IV — Technical documentation for high-risk AI (providers)
- Article 12 — Record-keeping and logging (providers)
- Article 13 — Transparency and provision of information to deployers (providers)
- Article 26(6) — Log retention for deployers
- Article 27 — Fundamental rights impact assessment (deployers)
- Article 17 — Quality management systems (providers)
Phase 4: Training
- Article 4 — AI literacy obligation (all organizations)
- Article 14 — Human oversight (requires competent oversight personnel)
- Article 26(2) — Deployer obligation to assign oversight to competent individuals
Phase 5: Monitoring
- Article 9 — Risk management as a continuous, iterative process (providers)
- Article 26(5) — Deployer obligation to monitor AI system operation
- Article 72 — Post-market monitoring (providers)
- Article 73 — Serious incident reporting (providers and deployers)
Common Mistakes and How to Avoid Them
Mistake 1: Starting with Documentation Instead of Discovery
Some businesses jump straight to writing policies and documentation without first understanding what AI they actually use. This produces impressive-looking documents that do not reflect reality. Always start with the inventory.
Mistake 2: Classifying Everything as Minimal Risk
Wishful classification is dangerous. If a regulator determines that your "minimal risk" AI recruitment tool is actually high-risk, you face penalties for non-compliance with every high-risk requirement you skipped. When in doubt, classify conservatively.
Mistake 3: Treating Compliance as a One-Time Project
Building the framework is a project. Maintaining it is an ongoing process. The most common failure mode is completing the initial compliance work and then letting it atrophy as tools change, employees turn over, and documentation grows stale.
Mistake 4: Ignoring Third-Party AI Tools
If you deploy a third-party AI tool classified as high-risk, you are a deployer with legal obligations under Article 26. Your provider's compliance does not replace your own. You still need human oversight, monitoring, log retention, and — for certain use cases — a fundamental rights impact assessment.
Mistake 5: Over-Engineering for Small Organizations
A five-person company does not need the same governance infrastructure as a 500-person company. The EU AI Act requires measures that are "proportionate." A concise set of policies, a simple inventory, basic training records, and documented monitoring procedures can satisfy the requirements for a small organization. Do not let perfect be the enemy of compliant.
Using the Checklist with AktAI
Each phase of this checklist maps directly to an AktAI feature:
- Phase 1: Discovery — Our AI systems inventory provides guided, structured system registration with quick-add templates for common tools. It captures every field needed for accurate classification and compliance documentation.
- Phase 2: Classification — Automatic risk classification against all Annex III categories with Article 6(3) exception analysis and detailed reasoning.
- Phase 3: Documentation — Audit-ready compliance documentation generated and stored with version control and export capability.
- Phase 4: Training — Article 4-compliant AI literacy training with role-appropriate content and completion tracking.
- Phase 5: Monitoring — Continuous compliance status tracking with gap analysis and alerts when your status changes.
You can track your progress across all five phases using our interactive compliance checklist, which provides a centralized view of your compliance journey with task-level tracking and deadline management.
For a detailed guide on building your AI systems inventory — the foundation that the entire checklist builds upon — see our AI systems inventory guide.
Start your compliance journey today. Run our free compliance assessment to see where you stand across all five phases and get a prioritized action plan tailored to your business.