AI Transparency Requirements Under the EU AI Act: Article 50 Explained
Complete guide to AI transparency obligations under the EU AI Act. Covers Article 50 chatbot disclosure, deepfake labeling, synthetic content marking, and transparency for high-risk systems.
Transparency Is the Most Broadly Applicable Obligation
When most people think about the EU AI Act, they think about high-risk AI systems and the heavy compliance burden they carry. But the transparency requirements affect a far wider range of organizations. Even if your AI systems are not high-risk, if they interact with people, generate content, or produce outputs that could be mistaken for human-created work, you have transparency obligations.
Article 50 of the EU AI Act establishes transparency requirements for specific categories of AI systems regardless of their risk level. Article 13 layers further transparency obligations onto high-risk systems specifically. Together, they create a transparency framework that touches nearly every organization using AI in the EU.
This guide breaks down exactly what is required, who it applies to, and how to implement it in practice.
Article 50: Transparency for Specific AI Systems
Article 50 identifies four categories of AI systems that carry specific transparency obligations, independent of their risk classification. These are sometimes called "limited risk" obligations, but that label can be misleading — they apply even to high-risk systems that also fall into these categories.
Category 1: AI Systems That Interact Directly with Persons
The requirement (Article 50(1)): Providers of AI systems intended to interact directly with natural persons must design the system so that the persons concerned are informed that they are interacting with an AI system, unless this is obvious from the circumstances and context of use.
What this means in practice: If your organization deploys a chatbot, virtual assistant, AI-powered phone system, or any other AI system that communicates directly with people, those people must be told they are interacting with AI. This applies to customer service chatbots, AI-powered email responders, automated phone systems with natural language capabilities, and interactive AI agents on websites.
The "obvious from the circumstances" exception: The regulation acknowledges that sometimes it is self-evident that someone is interacting with AI. A clearly labeled chatbot widget on a website with a robot icon and "AI Assistant" branding may satisfy the requirement through its design alone. However, if there is any ambiguity — for example, an AI that responds to emails in a way that could be mistaken for a human response — you must add explicit disclosure.
Implementation guidance:
- Place a clear disclosure before or at the start of the interaction, not buried in terms of service
- Use plain language: "You are interacting with an AI system" or "This conversation is powered by artificial intelligence"
- Make the disclosure persistent — if a conversation spans multiple sessions, the disclosure should be visible in each session
- For phone-based AI, the disclosure should be spoken at the beginning of the call
- For AI that generates email or message responses, include a visible indicator in the communication
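The guidance above can be sketched as a simple interaction flow. This is an illustrative sketch, not a prescribed implementation: the function names and disclosure wording are assumptions, and Article 50 does not mandate any particular phrasing.

```python
# Illustrative sketch of disclosure-first session handling.
# Wording and function names are hypothetical, not mandated by Article 50.

AI_DISCLOSURE = "You are interacting with an AI system."

def start_session(channel: str) -> list[str]:
    """Return the opening messages for a new session, disclosure first."""
    messages = []
    if channel == "voice":
        # For phone-based AI, speak the disclosure at the start of the call.
        messages.append(f"(spoken) {AI_DISCLOSURE}")
    else:
        # For chat widgets, show the disclosure before any other content.
        messages.append(AI_DISCLOSURE)
    messages.append("How can I help you today?")
    return messages

def render_email_reply(body: str) -> str:
    """Attach a visible AI indicator to an AI-generated message."""
    return f"{body}\n\n-- This message was generated with AI assistance."
```

The key design choice is that the disclosure is emitted by the session logic itself, so it appears in every session rather than depending on page copy that a rebrand or redesign might remove.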
The deployer angle: Note that this obligation falls on providers (those who develop and place the AI system on the market), but deployers (organizations that use the AI system) are also responsible for ensuring the transparency requirements are met in their specific deployment context. If you deploy a third-party chatbot, you must verify that it meets the disclosure requirements as configured in your environment.
Category 2: AI Systems That Generate Synthetic Content
The requirement (Article 50(2)): Providers of AI systems that generate synthetic audio, image, video, or text content must ensure that the outputs are marked in a machine-readable format and are detectable as artificially generated or manipulated.
What this means in practice: If your AI system produces content — images, audio, video, or text — that content must carry machine-readable markers indicating it was AI-generated. This is separate from any visible labeling (which is covered under Category 4 below). The requirement here is specifically about embedding metadata or watermarks that automated systems can detect.
Scope: This covers a vast range of AI applications:
- AI image generators (marketing images, product visualizations, social media content)
- AI text generators (marketing copy, reports, summaries, customer communications)
- AI audio generators (voiceovers, podcasts, customer service audio)
- AI video generators (marketing videos, training content, synthetic presentations)
- Any system that creates content that could be perceived as human-created
Technical implementation: The regulation does not specify the exact technical mechanism, but industry standards are emerging. For images, C2PA (Coalition for Content Provenance and Authenticity) metadata is becoming the de facto standard. For text, watermarking techniques that embed statistical signatures into generated text are developing rapidly. For audio and video, a combination of metadata embedding and steganographic watermarking is used.
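To make the machine-readable marking idea concrete, here is a deliberately simplified stand-in. Real deployments would use C2PA tooling to produce signed manifests; this sketch only shows the core concept of binding a detectable "AI-generated" assertion to a piece of content via its hash. All field names are assumptions.

```python
# Simplified stand-in for machine-readable provenance marking.
# Production systems would use C2PA manifests; this sketch just binds a
# detectable "ai_generated" claim to the content via its SHA-256 hash.
import hashlib
import json

def make_provenance_record(content: bytes, generator: str) -> str:
    """Build a machine-readable marker declaring the content AI-generated."""
    record = {
        "claim": "ai_generated",          # the detectable assertion
        "generator": generator,           # tool that produced the content
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    return json.dumps(record, sort_keys=True)

def verify_marker(content: bytes, marker: str) -> bool:
    """Check that a marker matches the content it claims to describe."""
    record = json.loads(marker)
    return (record.get("claim") == "ai_generated"
            and record.get("content_sha256")
                == hashlib.sha256(content).hexdigest())
```

Hashing the content means the marker cannot simply be copied onto unrelated material; a real C2PA manifest goes further by cryptographically signing the claim.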
The text exception: Article 50(2) includes a carve-out for AI-generated text that is "published with the purpose of informing the public on matters of public interest," provided it undergoes a human editorial process and a natural or legal person holds editorial responsibility. This means that news organizations using AI to assist with article writing, where a human editor reviews and takes responsibility for the final content, may be exempt from the machine-readable marking requirement — though the human editorial oversight must be genuine, not nominal.
Category 3: Emotion Recognition and Biometric Categorization Systems
The requirement (Article 50(3)): Deployers of emotion recognition systems or biometric categorisation systems must inform the persons exposed to such systems about their operation and process personal data in accordance with applicable EU law.
What this means in practice: If you deploy AI that attempts to detect emotions (from facial expressions, voice patterns, body language, or physiological signals) or categorize people based on biometric data (such as inferring demographic characteristics from physical features), you must tell the people being analyzed.
Important context: Many emotion recognition uses are already banned under Article 5 (prohibited practices), specifically emotion recognition in the workplace and in educational institutions. The transparency requirement in Article 50(3) applies to the remaining lawful uses of emotion recognition, such as in certain healthcare or accessibility contexts.
For most organizations: If you are not deliberately deploying emotion recognition or biometric categorization AI, this category may not apply to you. However, check whether any of your AI systems have these capabilities as a secondary feature. Some video analytics platforms, for example, may include emotion detection features that you might not have activated but that are technically present.
Category 4: Deep Fakes and Manipulated Content
The requirement (Article 50(4)): Deployers of AI systems that generate or manipulate image, audio, or video content constituting a deep fake must disclose that the content has been artificially generated or manipulated.
What this means in practice: If you use AI to create content that "appreciably resembles existing persons, objects, places, entities, or events and would falsely appear to a person to be authentic or truthful," you must label it as AI-generated or manipulated. This is the visible, human-readable counterpart to Category 2's machine-readable marking requirement.
The breadth of "deep fake": The EU AI Act's definition of deep fake is broader than the colloquial understanding. It is not limited to face-swapping videos of politicians. It covers any AI-generated or AI-manipulated content that could be mistaken for real, including:
- AI-generated images of people who do not exist (common in marketing)
- AI-altered product photos that change the appearance of real products
- AI voice cloning used in customer communications
- AI-generated video testimonials or demonstrations
- Synthetic media used in training materials that depicts realistic scenarios
Labeling requirements: The disclosure must be clear and visible. For images, this typically means a watermark or label visible on the image itself plus metadata. For video, an on-screen label at the start and ideally throughout. For audio, a spoken disclosure at the beginning. For text that constitutes a "deep fake" narrative, a clear label identifying it as AI-generated.
The artistic and satirical exception: Article 50(4) includes an exception for content that is part of "an evidently artistic, creative, satirical, fictional, or analogous work or programme." However, this exception requires appropriate safeguards that do not interfere with the rights of third parties. AI-generated parody content may be exempt from labeling, but AI-generated content that could damage a real person's reputation is not.
Article 13: Transparency for High-Risk AI Systems
Beyond Article 50's requirements for specific system types, Article 13 imposes additional transparency obligations specifically on high-risk AI systems. These are more demanding and more detailed.
What Article 13 Requires
High-risk AI systems must be designed and developed in such a way that their operation is sufficiently transparent to enable deployers to interpret the system's output and use it appropriately. This includes providing:
Instructions for use that include:
- The identity and contact details of the provider
- The system's characteristics, capabilities, and limitations of performance, including the degree of accuracy and the known circumstances that might affect performance
- The intended purpose and any conditions of reasonably foreseeable misuse
- Changes that have been pre-determined and assessed for compliance
- The human oversight measures built into the system
- The expected lifetime of the system and necessary maintenance measures
- The computational and hardware resources needed
Performance information including:
- The level of accuracy and the relevant accuracy metrics
- Performance with respect to the specific persons or groups on which the system is intended to be used
- Known or foreseeable biases
Interpretability requirements: Deployers must be able to understand the system's outputs well enough to use them appropriately. This does not mean full algorithmic transparency (the regulation does not require disclosing proprietary model architectures), but it does mean that the deployer must understand what the output means, how confident it is, and when it should not be relied upon.
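One way to operationalize this is to pair every output with the context a deployer needs to use it appropriately. The sketch below is a hedged illustration, not an Article 13 requirement: the wrapper fields, the 0.6 threshold, and the caveat wording are all assumptions.

```python
# Hedged sketch: a deployer-facing output wrapper that pairs each
# prediction with the context needed to use it appropriately.
# Field names and the confidence threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class InterpretableOutput:
    prediction: str
    confidence: float   # model-reported confidence, 0.0 to 1.0
    reliable: bool      # False when the output should not be relied upon
    caveat: str = ""

def wrap_output(prediction: str, confidence: float,
                in_scope: bool) -> InterpretableOutput:
    """Flag outputs that fall outside the system's validated conditions."""
    if not in_scope:
        return InterpretableOutput(
            prediction, confidence, False,
            "Input outside validated conditions; do not rely on this output.")
    if confidence < 0.6:  # illustrative threshold, not a regulatory value
        return InterpretableOutput(
            prediction, confidence, False,
            "Low confidence; route to human review.")
    return InterpretableOutput(prediction, confidence, True)
```

Surfacing the `reliable` flag and caveat alongside the prediction gives deployers exactly what the article asks for: what the output means, how confident the system is, and when it should not be relied upon.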
How Article 13 and Article 50 Interact
An AI system can be subject to both Article 13 and Article 50 simultaneously. A high-risk AI chatbot used for credit counseling, for example, must satisfy:
- Article 50(1): Inform the user they are interacting with AI
- Article 13: Provide comprehensive transparency documentation to the deploying organization
- Article 9: Risk management transparency as part of the broader risk management system
The obligations layer on top of each other. Satisfying Article 50 does not exempt you from Article 13, and vice versa.
Provider vs. Deployer Obligations
Understanding who is responsible for what is essential for compliance planning.
Provider Obligations
Providers — the organizations that develop or place AI systems on the market — bear the primary transparency obligations. They must:
- Design systems to meet disclosure requirements (Article 50(1))
- Implement machine-readable content marking (Article 50(2))
- Create comprehensive instructions for use (Article 13)
- Provide deployers with all information needed to meet their own obligations
- Maintain documentation that demonstrates compliance
Deployer Obligations
Deployers — the organizations that use AI systems in their operations — have their own transparency duties. They must:
- Inform persons about emotion recognition and biometric categorization (Article 50(3))
- Label deep fakes and manipulated content (Article 50(4))
- Ensure that provider-designed transparency measures are properly implemented in their deployment context
- Conduct a fundamental rights impact assessment for high-risk systems (Article 27), which includes transparency considerations
- Make the provider's instructions for use available to relevant staff
The Practical Challenge
The division of responsibility creates a practical challenge for deployers who use third-party AI systems. You need to verify that your providers have built in the necessary transparency features and that those features work in your deployment context.
Steps for deployers:
- Review your AI system providers' documentation for Article 50 compliance evidence
- Verify that chatbot disclosures appear correctly in your branded deployment
- Confirm that synthetic content carries the required machine-readable markers
- Test that transparency features work in your specific use case and do not degrade user experience to the point where they are effectively invisible
- Document your verification process
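The verification steps above lend themselves to a simple structured record, so the documentation requirement is satisfied as a by-product of running the checks. This is a hypothetical sketch; the class and field names are not drawn from the regulation.

```python
# Hypothetical record format for deployer-side transparency checks.
# Class and field names are illustrative, not taken from the AI Act.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class TransparencyCheck:
    system: str          # e.g. "support-bot"
    check: str           # e.g. "disclosure visible in branded widget"
    passed: bool
    checked_on: date = field(default_factory=date.today)

def audit_report(checks: list[TransparencyCheck]) -> dict:
    """Summarize verification results for compliance documentation."""
    failed = [c for c in checks if not c.passed]
    return {
        "total": len(checks),
        "failed": [f"{c.system}: {c.check}" for c in failed],
    }
```

Keeping the date on each check matters because a disclosure that worked at launch can silently disappear after a front-end redesign; periodic re-runs catch that.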
Use a free assessment to evaluate whether your AI deployments meet the transparency requirements across all applicable articles.
Implementing Transparency in Practice
For Customer-Facing AI
Customer-facing AI is the most visible area where transparency failures will be noticed — by customers, competitors, and regulators.
Chatbots and virtual assistants:
Design the disclosure into the interaction flow, not as an afterthought. The most effective approach is to have the AI system itself introduce its nature at the start of every interaction. For example: "Hello, I am an AI assistant. I can help you with [scope]. For complex issues, I can connect you with a human agent."
Avoid hiding the disclosure in small print or behind clicks. The regulation requires that persons "are informed" — passive availability of information (such as a footnote in terms of service) is unlikely to satisfy this requirement. Active disclosure is the safer approach.
AI-generated marketing content:
If you use AI to generate product images, marketing copy, or social media content, implement a systematic labeling process. This includes:
- Adding C2PA metadata to all AI-generated images before publication
- Including visible "AI-generated" labels on synthetic images used in advertising
- Maintaining records of which content was AI-generated and which was human-created
- Training marketing teams on labeling requirements for different content types and platforms
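The labeling process above can be backed by a content register that tracks, per asset, whether both the visible label and the machine-readable marker have been applied. The field names here are assumptions for illustration.

```python
# Illustrative content register for AI-generated marketing assets.
# Field names are assumptions, not an official schema.
from dataclasses import dataclass

@dataclass
class Asset:
    asset_id: str
    kind: str               # "image", "text", "audio", or "video"
    ai_generated: bool
    visible_label: bool     # human-readable "AI-generated" label applied
    machine_readable: bool  # embedded metadata / watermark applied

def needs_action(asset: Asset) -> list[str]:
    """Return outstanding labeling steps before publication."""
    gaps = []
    if asset.ai_generated and not asset.visible_label:
        gaps.append("add visible 'AI-generated' label")
    if asset.ai_generated and not asset.machine_readable:
        gaps.append("embed machine-readable marker (e.g. C2PA metadata)")
    return gaps
```

A register like this also produces the record-keeping the bullet list calls for: the distinction between AI-generated and human-created content is stored, not reconstructed after the fact.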
Customer communications:
If AI drafts or generates customer emails, letters, or messages, consider whether the recipient would reasonably believe they are communicating with a human. If yes, disclosure is required. A practical approach is to include a brief note: "This message was generated with AI assistance" or to use clearly branded AI communication templates.
For Internal AI Systems
Even internal AI systems may trigger transparency obligations if they affect employees or other individuals.
HR and recruitment AI: If AI assists with CV screening, candidate assessment, or employee evaluation, the affected individuals must be informed. This applies under both the AI Act (Article 50(1) if the system interacts directly with candidates, and Article 13 for high-risk employment AI) and GDPR (Articles 13-14, information obligations).
Internal analytics: AI systems used for workforce analytics, productivity monitoring, or internal risk scoring must be transparent to the employees they analyze. Even if these are not high-risk under the AI Act (though many will be, under Annex III Category 4 on employment), transparency is good practice and often required by employment law.
For High-Risk Systems Specifically
High-risk AI systems require a more structured transparency approach because the Article 13 requirements are more detailed.
Create a transparency dossier for each high-risk system that includes:
- A plain-language description of what the system does and how it reaches its outputs
- Documented accuracy metrics, including performance across different user groups
- Known limitations and circumstances that may affect performance
- A description of the human oversight mechanisms and how deployers should exercise them
- Contact information for the provider for questions and issue reporting
This dossier is distinct from the technical documentation required under Article 11 (which is more detailed and technical). The transparency dossier under Article 13 is specifically designed to enable deployers to understand and appropriately use the system.
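A transparency dossier can be kept as a structured record so each section is easy to audit for completeness. The sketch below paraphrases the dossier sections listed above; the system name, contact, metric values, and field names are all hypothetical placeholders, not an official template.

```python
# Sketch of an Article 13 transparency dossier as a structured record.
# Every value here is a hypothetical placeholder; section names
# paraphrase the dossier contents described above, not an official form.
dossier = {
    "system": "credit-scoring-assist",            # hypothetical system name
    "provider_contact": "compliance@example.com", # placeholder contact
    "plain_language_description": (
        "Ranks loan applications by estimated repayment risk."),
    "accuracy": {
        "overall_auc": 0.87,                      # illustrative metric
        "by_group": {"18-25": 0.84, "26-65": 0.88},
    },
    "known_limitations": ["performance degrades on thin credit files"],
    "human_oversight": (
        "An analyst reviews every declined application before a "
        "decision is issued."),
}

# A completeness check: every required section must be present and non-empty.
REQUIRED = {"system", "provider_contact", "plain_language_description",
            "accuracy", "known_limitations", "human_oversight"}

def dossier_complete(d: dict) -> bool:
    return REQUIRED <= d.keys() and all(d[k] for k in REQUIRED)
```

Treating the dossier as data rather than a free-text document makes the per-group accuracy reporting and the completeness check trivial to automate.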
Review your transparency documentation alongside your broader compliance documentation to ensure consistency.
The Connection to Consumer Trust
Transparency is not purely a compliance exercise. Research consistently shows that consumers trust AI systems more when they know AI is involved and understand how it works. Organizations that embrace transparency proactively — rather than treating it as a regulatory burden — tend to see better customer acceptance of their AI deployments.
The EU AI Act's transparency requirements align with what consumers increasingly expect. A 2025 Eurobarometer survey found that 82% of EU citizens want to know when they are interacting with AI, and 76% want AI-generated content to be clearly labeled. Meeting the regulatory requirements also meets customer expectations.
For retail and consumer-facing organizations, the transparency requirements can be turned into a competitive advantage. Being transparent about AI use signals trustworthiness. Explore the retail compliance guide for sector-specific approaches.
Penalties for Transparency Violations
Non-compliance with Article 50 transparency obligations can result in fines of up to EUR 15 million or 3% of global annual turnover, whichever is higher. For SMBs, the regulation provides for proportionate enforcement, but the penalties are still significant.
More practically, transparency violations are among the easiest to detect and prove. A regulator or consumer can simply interact with your chatbot and note whether they were informed it was AI. They can examine your marketing images for AI-generated content labels. They can test whether your synthetic content carries machine-readable markers. Unlike compliance with Article 9's risk management requirements (which requires examining internal documentation), Article 50 compliance is visible from the outside.
This makes transparency compliance a priority not just because of the legal requirements, but because violations are highly detectable.
Timeline and Quick Wins
Article 50's transparency requirements take full effect on August 2, 2026, alongside the rest of the high-risk AI obligations. However, many of these measures are straightforward to implement and deliver immediate benefits.
Quick wins you can implement now:
- Audit your chatbots — Add clear AI disclosure to every chatbot, virtual assistant, and automated communication system. This takes hours, not months.
- Label your AI-generated content — Implement a tagging system for marketing and communications teams to track and label AI-generated content. Start with visible labels and add machine-readable metadata as standards mature.
- Review your AI inventory — Use the discovery tool to identify AI systems in your organization that interact with people or generate content. You cannot be transparent about systems you do not know about.
- Update your privacy notices — If you deploy emotion recognition or biometric categorization AI (that is not prohibited), update your privacy notices and user-facing disclosures.
- Train your teams — Ensure that marketing, customer service, and product teams understand which AI-generated content needs labeling and how to apply it.
For a comprehensive view of your transparency obligations alongside all other AI Act requirements, start with a free assessment. For practical guidance on building a governance framework that embeds transparency into your AI operations, see our guide on AI governance frameworks for SMBs.
Transparency is not the hardest part of AI Act compliance, but it is the most visible. Get it right and you build trust with customers and regulators. Get it wrong and you become an easy enforcement target. The good news is that most transparency measures are straightforward to implement once you know what is required.