AI update for employers: Draft bill of the AI Act Implementing Act
The draft bill of the AI Act Implementing Act provides valuable guidelines for implementing the AI Act in Germany.
The European AI Act (Regulation (EU) 2024/1689 – AI Act) came into force on 1 August 2024. Although the central obligations of the AI Act will not take effect until 2 August 2026 (e.g. for new high-risk AI systems and GPAI models), EU Member States were already required to designate the supervisory authorities as provided for in the AI Act by August 2025. After some delay, a comprehensive concept for national implementation is now also available in Germany with the draft bill for an act to implement the AI Act dated 4 August 2025 (draft bill of the AI Act Implementing Act). We have summarised the main content of the draft from the employer's perspective for you below.
Practical note: One of the AI Act's key obligations for employers has already been in force since 2 February 2025: Under Article 4 of the AI Act, providers and operators of AI systems are required to ensure that their workforce has a sufficient level of AI literacy.
AI in the workplace falls under the jurisdiction of the German Federal Network Agency (BNetzA)
The draft bill of the AI Act Implementing Act designates the BNetzA as the general market surveillance authority. This authority is responsible for all areas that are not subject to technical supervision under special legislation (e.g. the German Federal Motor Transport Authority in the area of road traffic). This general responsibility also covers the use of AI in the workplace. The explanatory memorandum expressly clarifies that the BNetzA's remit includes the areas of "AI at the workplace and in educational institutions" listed in Annex III of the AI Act. It is also envisaged that the BNetzA will serve as the single point of contact within the meaning of Article 70 (2) AI Act (Article 6 draft bill of the AI Market Surveillance and Innovation Promotion Act). It is thus intended to act as the central German point of contact for all questions relating to the application of the AI Act at Member State and Union level and as an interface with the EU AI Office.
The designation of the BNetzA as the market supervisory authority comes as no surprise. Even before the statutory provisions were laid down in the draft bill of the AI Act Implementing Act, the BNetzA had already created specific structures to position itself as a central AI hub. Since spring 2025 at the latest, it has been acting as the primary point of contact for business, administration and science on issues relating to AI regulation. For example, in June 2025, the BNetzA published a guidance paper on the AI literacy required in Article 4 of the AI Act, and in July 2025, it set up an AI Service Desk to serve as a low-threshold point of contact for regulatory issues in the context of AI.
BNetzA to receive extensive enforcement powers
In order to fulfil its function as a market surveillance authority, the BNetzA is being granted extensive powers to monitor and enforce the requirements of the AI Act (section 11 draft bill of the AI Act Implementing Act). For example, the authority is empowered to request the submission of relevant documents, technical specifications or information on the conformity of AI systems. It may also require economic operators to take measures to stop the unauthorised use of AI. Where necessary, the BNetzA may itself take any measures needed to end the unauthorised use of AI or the risks associated with it.
For the national enforcement of fines, the draft stipulates that infringements within the meaning of Article 99 (3) to (5) AI Act are to be classified as administrative offences (section 15 draft bill of the AI Act Implementing Act). As the market surveillance authority, the BNetzA is responsible for prosecuting and sanctioning infringements of the AI Act in the workplace. Notably, section 30 German Administrative Offences Act (OWiG) does not apply under the draft bill of the AI Act Implementing Act, with the result that fines can be imposed directly on legal entities. In addition, the public prosecutor's office may discontinue investigations within the meaning of section 69 (4) sentence 2 OWiG only with the consent of the market surveillance authority.
Practical note: From an employer's perspective, the draft bill now provides for the BNetzA to be a clear government contact for regulatory issues relating to the use of AI systems in the workplace (e.g. recruiting, HR administration or employee monitoring). At the same time, however, the BNetzA is also being established as a supervisory authority with extensive monitoring and enforcement powers.
Whistleblower protection extended to violations of the AI Act
In addition, an extension of the German Whistleblower Protection Act (HinSchG) is planned (Article 2 draft bill of the AI Act Implementing Act): The material scope in section 2 (1) HinSchG is in future also to expressly include violations of the AI Act as reportable facts. With this extension, the German legislature implements Article 87 AI Act, under which the EU Whistleblower Directive (Directive (EU) 2019/1937) applies to the reporting of AI Act infringements and the protection of persons reporting them. For employers, this means that internal reporting offices and processes must be expanded to cover AI-specific case types (e.g. prohibited practices, shortcomings in high-risk systems, transparency violations). This expansion should be seen as an opportunity for employers: transparent and low-threshold internal reporting channels can reduce the risk of external reports, for example to the BNetzA, about alleged violations of the AI Act and the burdens associated with them (official audits, reputational damage, etc.).
AI Act and social secrecy
Another aspect relevant for employers is the amendment to the provision on social secrecy contained in the draft bill: the provisions of the AI Act are also to apply with regard to social data (Article 3 draft bill of the AI Act Implementing Act). To this end, section 35 (2) German Social Code I (SGB I) is to stipulate that the processing of social data is conclusively regulated by the Social Codes, insofar as the GDPR, and in future also the AI Act, do not contain directly applicable protective provisions.
Practical note: This extension of section 35 (2) German Social Code I (SGB I) makes clear that when processing personal social data, the provisions of the GDPR and the AI Act must be complied with in parallel.
Involvement of the German Federal Institute for Occupational Safety and Health
The German Federal Institute for Occupational Safety and Health (BAuA) is to assume a central coordinating role in supervisory matters at Union level (section 7 draft bill of the AI Act Implementing Act). According to the draft, the competent market surveillance authority must notify the Commission and the other Member States, via the BAuA and without undue delay, of any incidents of Union relevance within the meaning of Articles 79 and 81 AI Act. The reason for this allocation of responsibilities is practical: the BAuA already handles such notifications under numerous pieces of Union legislation and is therefore to fulfil this role here as well.
Consequences for employers: greater legal certainty, but also higher compliance requirements
The draft bill of the AI Act Implementing Act brings employers both clarity and additional compliance requirements. Under the draft, the BNetzA becomes a specific government point of contact, but at the same time a supervisory authority with far-reaching enforcement powers. In addition, whistleblower protection and the rules on social data are being extended to cover compliance with the AI Act.
This increases the pressure on employers to align their internal compliance structures with the requirements of the AI Act and to manage the next steps in the operational roll-out of AI systems carefully, including from a compliance perspective. Employers who carry out inventories, governance measures and training at an early stage, on the basis of an overall concept, not only protect themselves in the event of official audits but also strengthen the confidence of their workforce and the public in the responsible use of AI.