The EU AI Act does not treat all AI the same. It uses a four-tier risk classification system. The higher the risk, the stricter the rules. This means a simple chatbot faces far fewer requirements than an AI system used for hiring decisions.
The four risk tiers are: Unacceptable (banned), High Risk, Limited Risk, and Minimal Risk.
These AI practices are completely banned in the EU:
- **Social scoring** by governments (like China's social credit system)
- **Manipulative AI** that exploits vulnerabilities of specific groups
- **Real-time biometric surveillance** in public spaces (with narrow exceptions for law enforcement)
- **Emotion recognition** in workplaces and educational institutions
- **Untargeted scraping** of facial images from the internet
- **Predictive policing** based solely on profiling
If your business uses any of these, you must stop immediately. Violations carry fines of up to EUR 35 million or 7% of global annual turnover, whichever is higher.
High-risk AI systems face the strictest compliance requirements. These include AI used in:
- **Employment**: Resume screening, candidate ranking, performance evaluation
- **Education**: Student assessment, admission decisions
- **Financial services**: Credit scoring, insurance pricing, fraud assessment
- **Healthcare**: AI-assisted diagnosis, treatment planning
- **Law enforcement**: Risk assessment tools, evidence evaluation
- **Critical infrastructure**: Energy, water, transport management
- **Migration**: Visa processing, border control
High-risk systems require: risk management systems, data quality controls, technical documentation, human oversight, accuracy & robustness testing, and registration in the EU database.
**Limited Risk** AI systems have transparency obligations only. Users must be informed they are interacting with AI. This includes:
- Chatbots (must disclose they are AI)
- Deepfake generators (content must be labeled)
- Emotion recognition systems (users must be notified)
**Minimal Risk** AI systems have no specific obligations under the AI Act, though voluntary codes of conduct are encouraged. Examples include:
- Spam filters
- AI-powered search engines
- Inventory management systems
- Basic recommendation engines
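The four-tier model above is, at its core, a mapping from use case to obligation level. The following Python sketch illustrates that mapping using examples taken from this section; the use-case names and tier assignments are illustrative assumptions, not an exhaustive or authoritative legal classification.

```python
# Illustrative sketch of the AI Act's four-tier model.
# Tier assignments below mirror the examples in this article only;
# real classification requires legal analysis of the specific system.
RISK_TIERS = {
    # Unacceptable: banned outright
    "social_scoring": "unacceptable",
    "untargeted_face_scraping": "unacceptable",
    # High risk: strictest compliance requirements
    "resume_screening": "high",
    "credit_scoring": "high",
    "ai_assisted_diagnosis": "high",
    # Limited risk: transparency obligations only
    "chatbot": "limited",
    "deepfake_generator": "limited",
    # Minimal risk: no specific obligations
    "spam_filter": "minimal",
}

def classify(use_case: str) -> str:
    """Return the risk tier for a known use case; refuse to guess otherwise."""
    try:
        return RISK_TIERS[use_case]
    except KeyError:
        raise ValueError(f"Unknown use case {use_case!r}: needs legal analysis")

print(classify("resume_screening"))  # high
```

Note the deliberate design choice: an unknown use case raises an error rather than defaulting to "minimal", because under-classifying a system is the costly failure mode.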
An AI system used for screening job applications is classified as high risk: resume screening appears explicitly in the employment category of the high-risk list, so it carries the full set of compliance requirements described above.