Risk classification, documents, and obligations
The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive legislation governing artificial intelligence. It entered into force on 1 August 2024 and establishes a risk-based framework for AI systems placed on the market, put into service, or used in the EU.
For a comprehensive introduction, we recommend our free AI Act 101 learning path at /learn.
The AI Act uses four risk tiers:
1. **Unacceptable Risk**: Banned practices (social scoring, manipulative AI, certain biometric surveillance)
2. **High Risk**: Strict compliance requirements (employment AI, credit scoring, education systems)
3. **Limited Risk**: Transparency obligations (chatbots, deepfakes)
4. **Minimal Risk**: Voluntary codes of conduct (spam filters, basic recommendations)
AktAI's classification engine uses RAG (Retrieval-Augmented Generation) to accurately classify your systems based on the actual regulation text.
Article 4 of the EU AI Act requires providers and deployers to ensure that their staff, and anyone operating AI systems on their behalf, have a sufficient level of AI literacy. This obligation has applied since February 2, 2025.
AktAI provides built-in training modules that satisfy Article 4 requirements, with completion tracking and certificates.
The following AI practices are completely banned under the EU AI Act:
- Social scoring (by public or private actors)
- Manipulation exploiting vulnerabilities
- Real-time remote biometric identification in public spaces (with limited law enforcement exceptions)
- Emotion recognition in workplaces/schools
- Untargeted facial image scraping
- Predictive policing based solely on profiling
Use our free Prohibited Practices Checker at /check to verify your AI systems.
A Fundamental Rights Impact Assessment (FRIA) is required for deployers of high-risk AI systems in certain sectors, including public services, banking, insurance, and healthcare. It assesses the potential impact on fundamental rights such as non-discrimination, privacy, and human dignity.
AktAI auto-generates FRIAs based on your system details, which you can then review and customize.
Key enforcement dates:
- Feb 2, 2025: Prohibited practices ban + AI literacy (Article 4)
- Aug 2, 2025: Governance structure + general-purpose AI codes
- Aug 2, 2026: Full enforcement of all remaining provisions
- Aug 2, 2027: Extended deadline for Annex I high-risk systems
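The staggered timeline above can be checked programmatically. A minimal sketch, using only the dates from the list above:

```python
from datetime import date

# Enforcement milestones from the EU AI Act timeline above
MILESTONES = [
    (date(2025, 2, 2), "Prohibited practices ban + AI literacy (Article 4)"),
    (date(2025, 8, 2), "Governance structure + general-purpose AI codes"),
    (date(2026, 8, 2), "Full enforcement of all remaining provisions"),
    (date(2027, 8, 2), "Extended deadline for Annex I high-risk systems"),
]

def obligations_in_force(today: date) -> list[str]:
    """Return the milestones whose application date has already passed."""
    return [label for start, label in MILESTONES if today >= start]
```

For example, a check run in September 2025 would report the first two milestones as already applicable.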
Three penalty tiers:
- Tier 1: Up to EUR 35M or 7% of global annual turnover, whichever is higher (prohibited practices)
- Tier 2: Up to EUR 15M or 3% of global annual turnover, whichever is higher (other violations)
- Tier 3: Up to EUR 7.5M or 1% of global annual turnover, whichever is higher (supplying incorrect, incomplete, or misleading information to authorities)
Fines are proportional for SMBs but can still be significant.
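As a rough illustration of how the caps interact: for most companies the ceiling is the higher of the fixed amount and the turnover percentage, while for SMEs the Act caps it at the lower of the two. A sketch using the figures from the tiers above:

```python
# Penalty caps per tier: (fixed amount in EUR, share of global annual turnover)
TIER_CAPS = {
    1: (35_000_000, 0.07),  # prohibited practices
    2: (15_000_000, 0.03),  # other violations
    3: (7_500_000, 0.01),   # incorrect information to authorities
}

def max_fine(tier: int, global_turnover_eur: float, is_sme: bool = False) -> float:
    """Upper bound of the fine: higher of the two caps, or lower of the two for SMEs."""
    fixed, pct = TIER_CAPS[tier]
    pick = min if is_sme else max
    return pick(fixed, pct * global_turnover_eur)
```

For a company with EUR 1B turnover, a Tier 1 violation is capped at EUR 70M (7% exceeds the EUR 35M fixed amount); the same violation by an SME would be capped at EUR 35M.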
Providers (who build AI) have the heaviest obligations: conformity assessments, technical documentation, post-market monitoring, and more.
Deployers (who use AI built by others) have lighter but still important obligations: human oversight, logging, transparency, AI literacy, and use according to provider instructions.
Most SMBs are deployers.