The EU AI Act (Regulation (EU) 2024/1689) is a landmark piece of legislation that creates a unified legal framework for artificial intelligence across all 27 EU member states. It was adopted in 2024 and entered into force on 1 August 2024, establishing rules for how AI systems must be developed, deployed, and used.
Think of it like GDPR, but for AI. Just as GDPR set global standards for data protection, the AI Act is expected to influence AI regulation worldwide.
AI systems are increasingly making decisions that affect people's lives — from hiring and lending to healthcare and law enforcement. The EU AI Act was created to:
- Protect the fundamental rights and safety of EU citizens
- Create legal certainty for businesses developing or using AI
- Establish a risk-based approach (not all AI is treated the same)
- Foster innovation while preventing harmful uses of AI
- Build public trust in AI technology
The AI Act applies to anyone who develops, provides, or uses AI systems that affect people in the EU, regardless of where the company is based. This means:
- EU-based companies using any AI system
- Non-EU companies, if their AI system affects people in the EU
- Both "providers" (who build AI) and "deployers" (who use AI)
- Importers and distributors of AI systems in the EU market
If your business uses AI tools like ChatGPT, automated hiring software, or AI-powered customer service — this law applies to you.
**AI System**: A machine-based system that operates with some degree of autonomy and infers from its inputs how to generate outputs such as predictions, recommendations, decisions, or content that can influence its environment. This covers machine learning and deep learning as well as logic- and knowledge-based approaches.
**Provider**: The entity that develops an AI system or has one developed for them, and places it on the market or puts it into service.
**Deployer**: Any person or organization that uses an AI system under their authority (this is most businesses).
**High-Risk AI System**: An AI system that poses significant risks to health, safety, or fundamental rights. These face the strictest requirements.
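The definitions above can be summarized in a small data model. This is purely an illustrative sketch (the names, tiers, and helper function are ours, not the Act's legal text); the four risk tiers reflect the Act's widely cited risk pyramid, with high-risk systems facing the strictest requirements:

```python
from enum import Enum

class Role(Enum):
    """Illustrative actor roles under the AI Act (not official terminology)."""
    PROVIDER = "provider"        # develops the AI system and places it on the market
    DEPLOYER = "deployer"        # uses an AI system under its own authority
    IMPORTER = "importer"        # brings a third-country system onto the EU market
    DISTRIBUTOR = "distributor"  # makes a system available in the supply chain

class RiskTier(Enum):
    """The Act's risk-based pyramid, from strictest to lightest treatment."""
    UNACCEPTABLE = 4  # prohibited practices
    HIGH = 3          # strict requirements (conformity, documentation, oversight)
    LIMITED = 2       # transparency obligations (e.g. disclosing AI interaction)
    MINIMAL = 1       # largely unregulated

def obligations_apply(affects_people_in_eu: bool) -> bool:
    """Hypothetical scope check: applicability turns on effect in the EU,
    not on where the company is based."""
    return affects_people_in_eu
```

A non-EU provider whose hiring tool screens EU applicants would still be in scope: `obligations_apply(True)` returns `True` regardless of the company's location.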