EU AI Act Documentation: What You Need and How to Create It
A practical guide to EU AI Act compliance documentation. Learn what documents each risk level requires, what they must contain, and how to avoid common mistakes.
Documentation Is Not Bureaucracy — It Is Your Shield
If there is one lesson from GDPR enforcement, it is this: regulators do not care what measures you took if you cannot prove them. The EU AI Act takes this principle even further. Documentation is not an afterthought — it is a core legal requirement that determines whether your organization is compliant or exposed.
Article 11 mandates technical documentation for high-risk AI systems. Article 12 requires automatic logging and record-keeping. Article 18 requires providers to keep that documentation at the disposal of national authorities for ten years after the system is placed on the market or put into service. And even for minimal-risk AI tools, Article 4's AI literacy obligation is best demonstrated with documented evidence that training was delivered.
This guide covers exactly what documents you need, what each must contain, how to organize them, and the mistakes that trip up most businesses.
Documentation Requirements by Risk Level
Your documentation obligations scale with risk. Here is what each tier demands.
Minimal-Risk AI Systems
Even the simplest AI tools require some documentation under the EU AI Act:
- AI system inventory — A record of what AI tools your organization uses, maintained and updated regularly.
- AI literacy records — Evidence that staff using these systems have received appropriate AI literacy training (Article 4).
- Basic usage policies — Internal guidelines on acceptable use of AI tools.
While there are no specific technical documentation requirements for minimal-risk systems, maintaining these basic records demonstrates good governance and provides a foundation if a tool is later reclassified.
Limited-Risk AI Systems
In addition to the minimal-risk documents above, limited-risk systems require:
- Transparency disclosures — Documentation showing that you inform users when they interact with AI systems, as required by Article 50. This includes chatbot disclosures, synthetic content labels, and emotion recognition notifications.
- Disclosure implementation records — Evidence of how and where transparency notices are displayed.
High-Risk AI Systems
High-risk AI triggers the full documentation regime. Both providers and deployers have obligations, though the scope differs.
Providers must create and maintain (Article 11 and Annex IV):
- Technical documentation (detailed below)
- Quality management system documentation
- Conformity assessment records
- EU declaration of conformity
- Post-market monitoring plan
- Serious incident reports
Deployers must create and maintain (Article 26):
- Records of AI system deployment and intended use
- Human oversight procedures and assignments
- Monitoring logs and anomaly records
- Fundamental rights impact assessment (for certain high-risk systems, per Article 27)
- Incident reports and corrective actions
- Data protection impact assessment (if personal data is processed, overlapping with GDPR)
What Technical Documentation Must Contain
Annex IV of the EU AI Act specifies exactly what technical documentation must include for high-risk AI systems. This is the most detailed documentation requirement and applies primarily to providers. However, deployers need to understand it because they must verify that their provider's documentation is adequate.
Section 1: General Description of the AI System
- The system's intended purpose
- The name of the provider and the version of the system, reflecting its relation to previous versions
- How the system interacts with hardware or software that is not part of the system itself
- The versions of relevant software or firmware
- A description of the forms in which the system is placed on the market (e.g., software package, API, SaaS)
- A description of the hardware on which the system is intended to run
- Where the system is a component of a product, photographs or illustrations showing external features, markings, and internal layout
Section 2: Detailed Description of System Elements and Development Process
- The methods and steps taken to develop the AI system, including the use of pre-trained systems or third-party tools
- Design specifications, including the general logic of the AI system and algorithms
- Key design choices, including the rationale and assumptions made
- The system architecture explaining how software components build on or feed into each other
- Computational resources used in development, training, testing, and validation
- A description of the data requirements (data sheets, training methodologies, data sets used)
- An assessment of the human oversight measures needed, as per Article 14
- If applicable, a description of any predetermined changes to the system and its performance
Section 3: Monitoring, Functioning, and Control
- A description of the system's capabilities and limitations in performance
- The expected levels of accuracy, robustness, and cybersecurity, as per Article 15
- Known or foreseeable circumstances that may lead to risks to health, safety, or fundamental rights
- Technical measures for human oversight, including the tools and methods for human interpretation of AI outputs
- Input data specifications (format, source, scope)
- Relevant information about the training, validation, and testing data sets used
Section 4: Risk Management
- A description of the risk management system implemented, as per Article 9
- The identification and analysis of known and foreseeable risks
- The risk management measures adopted and how residual risks are addressed
- Testing procedures to ensure the system performs consistently for its intended purpose
Section 5: Changes and Lifecycle
- A description of all changes made to the system throughout its lifecycle
- The post-market monitoring system and plan
- Records of communication with national competent authorities and notified bodies
This is extensive. For an SMB that deploys but does not build high-risk AI systems, the good news is that your provider is responsible for most of this documentation. But you must verify it exists and obtain the portions relevant to your deployment.
The Seven Documents Every Deployer Needs
As a deployer — which includes any SMB using a high-risk AI system — you need these seven documents at minimum.
Document 1: AI System Registry
A central register of all AI systems in your organization. For each system, record:
- System name, version, and provider
- Risk classification (with reasoning)
- Date of deployment
- Intended purpose and actual use within your organization
- Department and individuals responsible for oversight
- Data processed by the system (types, sources, volumes)
- Integration points with other systems
- Review and update history
This is the foundation document. Everything else builds on it.
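The fields above map naturally onto a structured record. Here is a minimal sketch in Python; the `RegistryEntry` dataclass and its field names are our own illustration of the list above, not a schema the Act prescribes.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RegistryEntry:
    """One AI system in the organization's central register (illustrative schema)."""
    name: str
    version: str
    provider: str
    risk_class: str              # "minimal", "limited", or "high"
    classification_reason: str   # keep the reasoning next to the label
    deployed_on: date
    intended_purpose: str
    actual_use: str
    oversight_owner: str         # department or named individual responsible
    data_processed: list[str] = field(default_factory=list)
    integrations: list[str] = field(default_factory=list)
    review_history: list[str] = field(default_factory=list)

# Example entry for a hypothetical recruitment screening tool
entry = RegistryEntry(
    name="CV Screener",
    version="2.3",
    provider="ExampleVendor",
    risk_class="high",
    classification_reason="Annex III point 4: employment and recruitment",
    deployed_on=date(2025, 3, 1),
    intended_purpose="Rank incoming applications",
    actual_use="Pre-screening for engineering roles only",
    oversight_owner="HR lead",
)
```

Even a register this simple answers the first questions a regulator asks: what do you run, why is it classified that way, and who is accountable for it.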
Document 2: Risk Assessment for Each High-Risk System
For each high-risk system, document your risk assessment:
- The Annex III category under which the system is classified
- Specific risks identified for your deployment context
- Mitigation measures you have implemented
- Residual risks and their acceptability
- Whether the Article 6(3) exception was considered and why it was or was not applied
This assessment should be reviewed at least annually or whenever significant changes occur to the system or how you use it.
Document 3: Fundamental Rights Impact Assessment
Article 27 requires deployers of certain high-risk AI systems to conduct a fundamental rights impact assessment before deployment. This applies specifically to deployers that are bodies governed by public law, private entities providing public services, and deployers of AI systems referred to in Annex III points 5(b) and 5(c) (creditworthiness assessment and credit scoring, and risk assessment and pricing in life and health insurance).
Even if Article 27 does not strictly apply to your organization, conducting a fundamental rights impact assessment is good practice for any high-risk deployment. Document:
- Which fundamental rights may be affected (privacy, non-discrimination, dignity, etc.)
- The scale and scope of the AI system's impact
- Specific risks to affected groups, especially vulnerable populations
- Measures taken to mitigate fundamental rights impacts
- Mechanisms for affected individuals to complain or seek redress
Document 4: Human Oversight Procedures
For each high-risk AI system, document your human oversight arrangements:
- Who is responsible for overseeing the system (named individuals and their qualifications)
- What training these individuals have received
- When and how they review AI outputs before decisions are implemented
- The process for overriding or disregarding AI recommendations
- Escalation procedures when the system behaves unexpectedly
- How oversight effectiveness is monitored and evaluated
Human oversight is not just about having a person in the loop. It means that person is competent, empowered to override the system, and actually reviews outputs in a meaningful way. Document how you ensure this.
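The three conditions in the previous paragraph can be checked mechanically. A minimal sketch, assuming an `OversightAssignment` record of our own invention (the Act does not define these field names):

```python
from dataclasses import dataclass

@dataclass
class OversightAssignment:
    """Human oversight record for one high-risk system (illustrative)."""
    system: str
    overseer: str
    training_completed: bool
    can_override: bool           # empowered to disregard or reverse AI outputs
    reviews_before_decision: bool

def oversight_gaps(a: OversightAssignment) -> list[str]:
    """Flag assignments that fall short of meaningful oversight."""
    gaps = []
    if not a.training_completed:
        gaps.append("overseer has not completed training")
    if not a.can_override:
        gaps.append("overseer cannot override the system")
    if not a.reviews_before_decision:
        gaps.append("outputs are not reviewed before decisions take effect")
    return gaps
```

An empty gap list is what your documentation should be able to demonstrate for every named overseer.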
Document 5: Monitoring and Logging Records
Article 12 requires that high-risk AI systems enable automatic logging of events. As a deployer, you must:
- Maintain these logs for at least six months (unless sector-specific legislation requires longer)
- Record any anomalies, errors, or unexpected outputs you observe
- Document any corrective actions taken in response to monitoring findings
- Keep records of system performance over time, including any degradation
Structure your monitoring records with timestamps, descriptions, severity assessments, and actions taken. This creates an audit trail that demonstrates active governance.
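The four fields suggested above (timestamp, description, severity, action) fit a simple append-only format. A minimal sketch, assuming a JSON-lines log of our own design rather than any mandated format:

```python
import json
from datetime import datetime, timezone

def log_event(system: str, description: str, severity: str, action_taken: str) -> str:
    """Serialize one monitoring record with a UTC timestamp (illustrative format)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "description": description,
        "severity": severity,        # e.g. "info", "anomaly", "error", "serious"
        "action_taken": action_taken,
    }
    return json.dumps(record)

# One line per event, appended to a durable, access-controlled store
line = log_event("CV Screener", "score distribution drifted", "anomaly", "flagged to vendor")
```

A format like this is trivially searchable and exportable, which is exactly what an audit trail needs to be.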
Document 6: Incident Reports
When things go wrong — an AI system produces a discriminatory outcome, causes harm, or malfunctions significantly — you need documented incident response:
- What happened, when, and who was affected
- The root cause (if known) or investigation status
- Immediate corrective actions taken
- Longer-term preventive measures
- Whether the incident was reported to the provider
- Whether the incident constitutes a "serious incident" under Article 73, requiring notification to market surveillance authorities
If you identify a serious incident as a deployer, immediately inform the provider (Article 26(5)). Under Article 73, serious incidents must be reported immediately after a causal link between the AI system and the incident is established, and no later than 15 days after becoming aware of the incident.
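The 15-day outer deadline is easy to track programmatically. A minimal sketch (shorter deadlines apply in specific cases, so in practice the rule is: report immediately):

```python
from datetime import date, timedelta

# Article 73: no later than 15 days after becoming aware of the serious incident
REPORTING_DEADLINE = timedelta(days=15)

def report_due_by(became_aware: date) -> date:
    """Latest date a serious incident report may be filed (simplified)."""
    return became_aware + REPORTING_DEADLINE
```

Wiring a check like this into your incident log ensures the clock starts the moment awareness is recorded, not when someone gets around to it.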
Document 7: Transparency and Information Records
Document how you meet your transparency obligations:
- How affected individuals are informed that they are subject to AI-driven decisions
- What information is provided to them about the system's purpose and operation
- Where and how this information is made accessible
- Records of any requests for information from affected individuals and how they were handled
Common Documentation Mistakes
Mistake 1: Creating Documents and Never Updating Them
Documentation is not a one-time exercise. The EU AI Act requires living documents that reflect the current state of your AI systems. A risk assessment from twelve months ago is inadequate if your AI tools have been updated, your usage has changed, or new risks have emerged.
Set a review schedule. Most documents should be reviewed at least every six months, with ad-hoc updates triggered by significant changes. Record every review, even if no changes are made — "reviewed on [date], no changes required" is itself valuable documentation.
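Recording every review, including no-change ones, can be a one-line habit. A minimal sketch, assuming the roughly six-month interval suggested above (the interval is our choice, not a statutory one):

```python
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=182)  # roughly six months

def record_review(history: list[str], reviewed_on: date,
                  changes: str = "no changes required") -> date:
    """Log a review (even a no-change one) and return the next due date."""
    history.append(f"reviewed on {reviewed_on.isoformat()}: {changes}")
    return reviewed_on + REVIEW_INTERVAL
```

The returned due date can feed a reminder system so reviews are triggered, not remembered.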
Mistake 2: Writing for Lawyers Instead of Regulators
Technical documentation should be clear, specific, and structured. Regulators are looking for evidence of genuine compliance, not legal hedging. Write in plain language, use concrete examples, and organize information so a reviewer can find what they need quickly.
Every claim should be backed by evidence. If you say "human oversight is ensured," the follow-up question is "how, by whom, and where is the evidence?" Your documentation should preemptively answer these questions.
Mistake 3: Keeping Documents in Scattered Locations
If your AI system registry lives in a spreadsheet, your risk assessments are in Word documents on someone's desktop, your training records are in an HR system, and your monitoring logs are in email threads, you do not have a documentation system — you have a compliance liability.
Centralise your documentation. When a regulator asks for evidence of compliance, you need to produce a coherent, complete package — not spend days hunting through file shares and inboxes.
Mistake 4: Ignoring Version Control
AI systems change over time. Your documentation must track these changes. If an AI-powered recruitment tool receives a major update from the vendor, your existing risk assessment and human oversight procedures may no longer be accurate.
Maintain version histories for all documents. Record what changed, when, why, and who approved the change. This is not just good practice — it is how you demonstrate that your compliance is active and ongoing, not a snapshot that has gone stale.
Mistake 5: Not Documenting Negative Decisions
Sometimes the most important documentation is about what you decided NOT to do. If you evaluated an AI tool and decided against deploying it because it was high-risk and the compliance burden was not justified, document that decision. It shows your organization takes classification seriously.
Similarly, if you considered the Article 6(3) exception and decided it did not apply, document the reasoning. This is far better than having no record of the analysis at all.
Mistake 6: Treating Provider Documentation as Sufficient
If you deploy a third-party high-risk AI system, the provider's technical documentation does not satisfy your deployer obligations. You need your own documentation covering how you use the system, your oversight procedures, your risk assessment in your specific context, and your monitoring records. The provider's documentation is an input to yours, not a replacement for it.
Building an Approval Workflow
Documentation needs governance. Not just creation, but review, approval, and controlled distribution. Here is a practical workflow for SMBs.
Roles
- Document owner — The person responsible for keeping a specific document accurate and current. Usually the person closest to the AI system in question.
- Reviewer — A second set of eyes who checks the document for completeness and accuracy. Ideally someone with compliance knowledge.
- Approver — A senior person (department head, compliance lead, or director) who formally signs off. Their approval means the organization stands behind the document's contents.
Process
- Draft — Document owner creates or updates the document based on current facts.
- Review — Reviewer checks for completeness against the regulatory requirements, flags gaps or inaccuracies.
- Revise — Document owner addresses review comments.
- Approve — Approver formally signs off, confirming the document is complete, accurate, and ready for regulatory scrutiny.
- Publish — Document is stored in the central documentation system with version number, date, and approver's name.
- Schedule review — Set the next review date based on the document type and risk level.
For an SMB with five employees, this can be lightweight — the owner writes it, a colleague reviews it, and the director approves it. The process itself matters less than the fact that it exists, is followed, and is documented.
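The six steps above form a small state machine, and enforcing its transitions in tooling is cheap. A minimal sketch (the state names mirror the process list; real tooling would also record who acted and when):

```python
# Allowed transitions in the draft -> review -> approve -> publish workflow
TRANSITIONS = {
    "draft":   {"review"},
    "review":  {"revise", "approve"},
    "revise":  {"review"},
    "approve": {"publish"},
    "publish": {"schedule_review"},
}

def advance(state: str, target: str) -> str:
    """Move a document to the next workflow state, rejecting illegal jumps."""
    if target not in TRANSITIONS.get(state, set()):
        raise ValueError(f"cannot move from {state!r} to {target!r}")
    return target
```

Rejecting illegal jumps (draft straight to publish, for instance) is what turns a written process into a followed one.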
Retention Periods
The EU AI Act specifies different retention periods depending on your role:
- Providers: Technical documentation must be kept for ten years after the high-risk AI system is placed on the market or put into service (Article 18).
- Deployers: Logs automatically generated by high-risk AI systems must be kept for a minimum of six months, unless otherwise provided by applicable Union or national law (Article 26(6)).
- General records: While the regulation does not specify a universal retention period for all deployer documentation, maintaining records for the duration of the AI system's use plus a reasonable period after decommissioning (at least five years) is prudent.
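The three retention rules above can be encoded directly. A minimal sketch; the role labels and function are our own simplification, and sector-specific law may extend any of these periods:

```python
from datetime import date, timedelta

def retention_until(role: str, reference: date) -> date:
    """Earliest safe disposal date (simplified reading of the periods above).

    'provider'         -> ten years after placing on the market (Article 18)
    'deployer_logs'    -> six months after log creation (Article 26(6)),
                          absent longer sector-specific rules
    'deployer_general' -> prudent five years after decommissioning
                          (good practice, not a statutory period)
    """
    if role == "provider":
        return reference.replace(year=reference.year + 10)
    if role == "deployer_logs":
        return reference + timedelta(days=183)  # roughly six months
    if role == "deployer_general":
        return reference.replace(year=reference.year + 5)
    raise ValueError(f"unknown role: {role!r}")
```

Tagging every document with its role and reference date at creation time means disposal dates never have to be reconstructed later.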
Store documentation in durable formats. Paper records in filing cabinets are technically compliant but impractical. Digital records in a centralized, backed-up system with access controls are the standard expectation.
The Cost of Getting Documentation Wrong
Poor documentation does not just create regulatory risk. It creates business risk.
- During an audit: If you cannot produce required documentation, it is treated as non-compliance — even if your actual practices are sound. Fines for failing to comply with high-risk obligations can reach EUR 15 million or 3% of total worldwide annual turnover, whichever is higher (Article 99).
- During an incident: If your AI system causes harm and you have no documentation of your oversight, risk management, or monitoring, you lose every defence available to you.
- During procurement: As EU AI Act awareness grows, large enterprises and public bodies will require AI Act compliance evidence from their suppliers. Without proper documentation, you lose deals.
- During due diligence: Investors, acquirers, and partners will increasingly assess AI governance maturity. Documented compliance adds tangible business value.
How AktAI Generates Documentation Automatically
Creating and maintaining seven categories of compliance documentation across multiple AI systems is exactly the kind of work that scales poorly when done manually. AktAI was built to solve this problem.
- Automatic document generation — When you register an AI system and complete the classification process, AktAI generates draft documentation tailored to your specific deployment. Risk assessments, human oversight templates, and transparency records are populated based on the information you provide about how you use each system.
- Centralized document registry — Every document lives in one place, organized by AI system and document type. No more scattered files across drives and inboxes.
- Version control built in — Every edit is tracked with timestamps, author information, and change descriptions. You can see the complete history of any document at a glance.
- Review and approval workflows — Assign reviewers and approvers, track sign-off status, and receive reminders when documents are due for review.
- Gap analysis — AktAI continuously checks your documentation against the regulatory requirements for each AI system's risk level and flags what is missing or outdated.
- Export for audits — Generate a complete compliance package for any AI system or your entire organization in one click. Formatted, organized, and ready for regulatory review.
- AI-powered drafting — For SMBs without in-house compliance expertise, AktAI's AI assistant helps draft documentation in clear, regulator-ready language based on your inputs about each system.
Documentation is the most time-consuming part of EU AI Act compliance. It is also the part where automation delivers the most value. What takes weeks of manual work can be reduced to hours with the right tooling.
See what documentation you are missing. Run our free compliance assessment to get a personalized gap report covering all seven deployer document categories — then let AktAI help you close the gaps.