Publication 03 Dec 2025 · Germany

AI Act – new transition periods and German market surveillance

Adjusted timelines and Germany’s AI market surveillance framework

5 min read

The European Commission is planning to adjust the transition periods set out in the AI Act. Germany will also implement a national market surveillance system for AI systems. Here is what companies need to know now.

Digital omnibus: new deadlines for high-risk AI systems 

With the adoption of Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (AI Act), the EU has imposed comprehensive rules on AI systems and general-purpose AI models. 

The first provisions on establishing AI literacy among providers and professional users (deployers) of all AI systems (Article 4 AI Act) and on prohibited AI practices (Article 5 AI Act) have been in effect since 2 February 2025. 

On 19 November 2025, the European Commission introduced the digital omnibus (Digital omnibus on AI), with which it intends to adapt and simplify various EU digital laws, including the AI Act. For example, the obligation of providers and deployers, in effect since 2 February 2025, to ensure AI literacy among those who use AI systems on their behalf is to be dropped. Instead, responsibility for promoting AI literacy would pass to the European Commission and the Member States as a task under their overall policies. 

Under the current version of the AI Act, the provisions for high-risk AI systems in particular, along with important transparency obligations for providers and deployers of low-risk AI systems, will apply from 2026. The digital omnibus would postpone the date when these provisions take effect. 

Use-related high-risk AI systems: Obligations apply from 2 December 2027 at the latest 

High-risk AI systems can pose significant risks to people's fundamental rights, health and safety, which is why the AI Act imposes comprehensive obligations on the providers and deployers of such AI systems. The AI Act distinguishes between two categories for this purpose: 

  • high-risk AI systems which are themselves products or safety components of products that fall under the product safety harmonisation legislation in Annex I AI Act;
  • high-risk AI systems that are used in specific use-related contexts listed in Annex III AI Act, such as in the fields of critical infrastructure and personnel management.

To date, the AI Act stipulates that providers and deployers in the second category must comply with comprehensive requirements from 2 August 2026, particularly with regard to data governance, documentation, transparency, robustness and cybersecurity. Under the digital omnibus, the obligations for use-related high-risk AI systems would not take effect until, as a rule, six months after the European Commission issues a decision confirming that adequate means of demonstrating compliance are available (for example, harmonised standards or common specifications). If such measures are not published in time, the aforementioned provisions must be observed from 2 December 2027 at the latest. 

Non-compliance with the requirements can result in fines of up to EUR 15 million. 

Transparency provisions for providers and deployers of low-risk AI systems 

In addition to the provisions for high-risk systems, the AI Act contains rules for AI systems that pose limited risks, such as AI-based chatbots and AI systems that generate creative content. 

For example, providers of generative AI systems must ensure in accordance with Article 50 (2) AI Act that the content generated with AI is marked in a machine-readable format and detectable as artificially generated or manipulated. 
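In practice, a machine-readable marking under Article 50 (2) AI Act could take the form of structured metadata embedded in the generated output. The sketch below is a minimal illustration only: the marker name `ai-provenance` and its JSON fields are assumptions of this example, not a format prescribed by the AI Act (the expected formats would follow from harmonised standards or industry provenance schemes, not from this sketch).

```python
import json

def mark_ai_generated(html_body: str, generator: str) -> str:
    """Embed a machine-readable AI-provenance marker as an HTML meta tag.

    The tag name and JSON schema are illustrative assumptions, not a
    format mandated by the AI Act.
    """
    marker = json.dumps({"aiGenerated": True, "generator": generator})
    # The JSON payload contains double quotes, so the attribute uses single quotes.
    meta = f"<meta name=\"ai-provenance\" content='{marker}'>"
    return f"<html><head>{meta}</head><body>{html_body}</body></html>"

def is_marked(html_doc: str) -> bool:
    """Detect the marker with a simple string check (sufficient for this sketch)."""
    return '<meta name="ai-provenance"' in html_doc

doc = mark_ai_generated("<p>Generated text.</p>", "example-model-1")
print(is_marked(doc))  # True
```

A real implementation would follow whichever technical standard the Commission recognises for demonstrating compliance; the point here is only that the marker must be detectable by software, not merely visible to humans.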

While the AI Act as it exists requires this from 2 August 2026, according to the digital omnibus, providers of generative AI systems that were already available on the EU internal market before 2 August 2026 will not have to comply with the obligations under Article 50 (2) AI Act until 2 February 2027. Providers of systems released after 2 August 2026 must comply with the transparency obligations from that date. 

From 2 August 2026, deployers of AI systems that produce content which misleads the audience into believing that people have made certain statements or taken certain actions ("deepfakes") will be required to disclose that the content has been artificially generated or manipulated. Furthermore, deployers must ensure that AI-generated texts that inform the public about matters of public interest (e.g. AI-generated political statements by a company) are clearly marked as AI-generated. 
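The deployer-side disclosure duty could be met by attaching a plainly visible notice to the published content. The wording and placement in the sketch below are illustrative choices for this example, not language mandated by the AI Act.

```python
def add_ai_disclosure(text: str) -> str:
    """Prepend a visible AI disclosure to generated or manipulated text.

    The notice wording is an assumption of this sketch, not statutory language.
    """
    notice = "Notice: this content was artificially generated or manipulated (AI)."
    return f"{notice}\n\n{text}"

labelled = add_ai_disclosure("The CEO announced a new strategy today.")
print(labelled.splitlines()[0])
```

For deepfakes and AI-generated texts on matters of public interest, the decisive point is that the disclosure is clear to the audience; how it is rendered (caption, overlay, byline) is left to the deployer.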

Violations of the aforementioned transparency obligations are also punishable by fines of up to EUR 15 million. 

Market surveillance in Germany: the future role of the Federal Network Agency 

Market surveillance of AI systems is organised in two parts under the AI Act. The surveillance of general-purpose AI models is the responsibility of the AI Office established within the European Commission. National market surveillance and notification authorities, by contrast, are responsible for monitoring compliance with the provisions on prohibited AI practices and on high-risk and limited-risk AI systems. 

Germany is planning to implement market surveillance using the AI Market Surveillance and Innovation Promotion Act (KI-MIG), which is currently in the draft bill stage. According to this bill, the Federal Network Agency (BNetzA) is to be established as the central market surveillance authority for all AI systems that are not already subject to specialised statutory supervision. 

Summary and outlook 

Binding provisions for the use of AI, in particular transparency requirements, will apply from 2 August 2026, especially for deployers of low-risk AI systems. For providers of high-risk AI systems, the date when additional obligations resulting from the AI Act go into effect will be postponed significantly if the digital omnibus is adopted. 

In Germany, the Federal Network Agency (BNetzA) is expected to assume the central role in market surveillance. It already offers support for companies in the AI sector. Companies that make early preparations can minimise risks and secure competitive advantages. 
