EU AI Act Developments: Key Political Agreement on the Digital Omnibus on AI, Implementation Timeline and Transparency Consultation
On 8 May 2026, the EU Commission opened its consultation on the draft guidelines on AI transparency obligations under Article 50 of the AI Act. The guidelines provide practical guidance to help authorities, providers, and deployers of AI systems comply with their obligations in a consistent, effective, and uniform way. The consultation period will run until 3 June 2026, and Parliament is expected to vote on the final text by 7 July 2026.
The Commission’s decision came a day after the Council and the European Parliament reached a preliminary agreement on key points of the proposed Digital Omnibus on AI to simplify rules of the AI Act and strengthen Europe’s competitiveness.
Digital Omnibus on AI
The key points on the Digital Omnibus on AI reached by the Council and European Parliament include the following:
- New prohibited AI practices, including creating non-consensual intimate images and child sexual abuse material.
- Maintaining the requirement for providers to register AI systems in an EU database if they claim their system is not high-risk.
- Allowing the use of special categories of personal data for bias detection and correction only when less intrusive data (e.g. anonymised or synthetic data) is not enough.
- Shortening the deadline for adding transparency measures to AI-generated content to 2 December 2026. Providers must add labels or technical markers so content can be identified as AI-generated or manipulated.
- Setting exceptions where national authorities (not the AI Office) will oversee AI systems based on general-purpose AI models in law enforcement, border control, courts, and financial institutions when the same provider develops both the model and the system.
- Allowing the AI Act to be adjusted through delegated acts when other sector-specific laws already cover similar AI requirements (e.g. for medical devices, machinery or toys). By 2 August 2027, the Commission can specify which products are affected and which AI Act rules will not apply.
- Removing the EU Machinery Regulation from the AI Act’s direct scope. This means AI in machinery will not have to follow all AI Act high-risk rules. Instead, the Commission will add AI-specific safety rules under the Machinery Regulation. This avoids overlap with existing laws. AI in medical devices and radio equipment, however, is still covered by the AI Act and must meet both AI Act and sector-specific requirements.
- Requiring the Commission to issue guidance to help companies in sectors like medical devices, toys, lifts, and watercraft comply with high-risk AI rules. The goal is to reduce extra burden by showing how to combine AI Act requirements (e.g. risk and quality management) with existing sector rules. These guidelines should be published by 1 August 2027.
Deadlines
The timeline for AI Act compliance is as follows:
- 2 December 2026:
- AI systems that generate content (e.g. audio, images, video, text) must include labels or markers showing that the content is AI-generated.
- New bans apply to AI that creates non-consensual intimate images or child sexual abuse material.
- 1 August 2027:
- The Commission must publish guidance to help companies in sectors like medical devices, toys, lifts, and watercraft adhere to high-risk AI rules without duplicating requirements.
- 2 August 2027:
- Countries must set up AI regulatory sandboxes (i.e. testing environments).
- The Commission must clarify where some AI Act rules may not apply to certain high-risk systems.
- 2 December 2027:
- High-risk rules begin to apply to standalone AI systems.
- 2 August 2028:
- High-risk rules begin to apply to AI built into products.
- New AI-related rules under the Machinery Regulation take effect.
- 2 August 2030:
- Existing high-risk AI systems used by public authorities must be brought into compliance with the AI Act.
For more information on developments in AI regulation in the EU and how this might affect your business, contact your CMS client partner or the CMS experts who contributed to this article.