
Publications

Discover forward-looking developments and legal insights from our legal experts across all areas of CMS. Our Expert Guides are written by CMS lawyers from all of the jurisdictions in which we operate and can be read both online and offline; they provide comprehensive legal research and analysis. Our Law-Now articles offer legal analysis, commentary and insights to help you master the challenges ahead.



13/03/2024
General purpose AI models and measures in support of innovation
General purpose AI models (currently Title VIIIA, Art. 52a-52e)

The AI Act is founded on a risk-based approach. The regulation, intended to be durable, was initially tied not to the characteristics of any particular model or system, but to the risk associated with its intended use. This was the approach when the European Commission drafted and adopted the proposal for the AI Act on 22 April 2021, and when the proposal was discussed at the Council of the European Union on 6 December 2022. However, after the global success of generative AI tools in the months following the Commission's proposal, regulating AI by reference to its intended use alone no longer seemed sufficient. The 14 June 2023 draft therefore introduced the concept of "foundation models" (much broader than generative AI) together with associated rules. During the negotiations in December 2023, further proposals were tabled regarding "very capable foundation models" and "general purpose AI systems built on foundation models and used at scale". The final version of the AI Act makes no reference to "foundation models"; instead, the concept of "general purpose AI models and systems" was adopted.

General Purpose AI models (Arts. 52a to 52e) are distinguished from General Purpose AI systems (Arts. 28 and 63a). General Purpose AI systems are based on General Purpose AI models: "when a general purpose AI model is integrated into or forms part of an AI system, this system should be considered a general purpose AI system" if it has the capability to serve a variety of purposes (Recital 60d). And, of course, General Purpose AI models are themselves the result of the operation of the AI systems that created them.

"General purpose AI model" is defined in Article 3.44b as "an AI model (…) that displays significant generality and is capable to competently perform a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications". The definition is almost circular (a model is "general purpose" if it "displays generality"; Recital 60b helps to clarify the concept, indicating that "generality" is assumed where the model has at least a billion parameters and its training uses "a large amount of data using self-supervision at scale") and has a remarkable capacity for expansion. Large generative AI models are an example of General Purpose AI models (Recital 60c).

The obligations imposed on providers of General Purpose AI models are limited, provided the models do not pose systemic risk. These obligations include (Art. 52c): (i) drawing up and keeping up to date technical documentation (as described in Annex IXa), to be made available to the national competent authorities and to providers of AI systems who intend to integrate the General Purpose AI model into their own AI systems; and (ii) taking certain measures to respect EU copyright legislation, namely putting in place a policy to identify reservations of rights and making publicly available a sufficiently detailed summary of the content used. Providers should also have an authorised representative in the EU (Art. 52ca).

The most important obligations are imposed in Article 52d on providers of General Purpose AI models with systemic risk. Article 52a defines AI models with systemic risk in terms that are too broad and unsatisfactory: "high impact capabilities".
Fortunately, a presumption in Article 52a.2 helps: a model is presumed to have high impact capabilities "when the cumulative amount of compute used for its training measured in floating point operations (FLOPs) is greater than 10^25". The main additional obligations imposed on providers of General Purpose AI models with systemic risk are (i) to perform model evaluation (including adversarial testing), (ii) to assess and mitigate systemic risks at EU level, (iii) to document and report serious incidents and corrective measures, and (iv) to ensure an adequate level of cybersecurity. Finally, a "general purpose AI system" is "an AI system which is based on a General Purpose AI model, that has the capacity to serve a variety of purposes" (Art. 3.44e). If a General Purpose AI system can be used directly by deployers for at least one purpose that is classified as high-risk (Art. 57a and Art. 63a), a compliance evaluation will need to be carried out.
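Because the Article 52a.2 presumption turns on a single number, the classification step can be illustrated mechanically. The following is a minimal Python sketch, assuming a hypothetical helper name and a known cumulative training-compute figure; it is illustrative only and is not part of the AI Act.

```python
# Illustrative sketch only: the 10^25 FLOP figure comes from Art. 52a.2;
# everything else (names, structure) is a hypothetical example.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # cumulative training compute, in FLOPs

def presumed_systemic_risk(training_flops: float) -> bool:
    """Return True if the model falls under the Art. 52a.2
    compute-based presumption of systemic risk."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

# Example: a model trained with 3.1e25 FLOPs falls under the presumption,
# while one trained with 8e24 FLOPs does not.
print(presumed_systemic_risk(3.1e25))  # True
print(presumed_systemic_risk(8e24))    # False
```

Note that the presumption is rebuttable in principle and that the Commission may update the threshold; a real assessment would not stop at this arithmetic.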
12/03/2024
Prohibited AI practices and high-risk AI systems
Prohibited Artificial Intelligence practices (currently Title II, Art. 5)

1. Introduction to the unacceptable risk category

Article 5 categorises certain AI technologies as posing an "unacceptable risk" (Unacceptable Risk). Unlike the other risk categories outlined in the AI Act, the use of AI technologies that fall within this category is strictly prohibited ("Prohibited AI Systems"). It is therefore necessary to distinguish between:
- those technologies that are clearly prohibited; and
- those AI applications that are not clearly prohibited but may involve similar risks.

The most challenging problem in practice is to ensure that activities which are not prohibited do not become Unacceptable Risk activities and therefore prohibited.

2. Unacceptable Risk: prohibited AI practices

Article 5 explicitly bans a number of harmful AI practices.

The first prohibition under Article 5 addresses systems that manipulate individuals or exploit their vulnerabilities, leading to physical or psychological harm. Accordingly, it would be prohibited to place on the market, put into service or use in the EU:
- AI systems designed to deceive, coerce or influence human behaviour in harmful ways; and
- AI tools that prey on an individual's weaknesses, exacerbating their vulnerabilities.

The second prohibition covers AI systems that exploit these vulnerabilities even if harm is not immediate. Examples include:
- AI tools that compromise user privacy by collecting sensitive data without consent; and
- AI algorithms that perpetuate bias or discrimination against certain groups.

The third prohibition focuses on the use of AI for social scoring. Social scoring systems assign scores to individuals based on their behaviour, affecting access to services, employment or other opportunities. Prohibited practices include:
- AI-driven scoring mechanisms that lack transparency, fairness or accountability; and
- systems that discriminate based on protected characteristics (e.g. race, gender, religion).

The fourth prohibition covers real-time biometric identification in publicly accessible spaces for law enforcement purposes. This includes:
- AI systems that identify individuals without their knowledge or consent; and
- continuous monitoring of people's movements using biometric data.

3. Clearly listed: best practices and compliance

Transparency and accountability are essential in complying with the prohibitions under Article 5. Firms using AI must design and continuously test their systems, be transparent about their intentions and avoid manipulative practices. They should also disclose AI systems' functionality, data usage and decision-making processes. Companies should conduct thorough impact assessments to identify unintended vulnerabilities and implement specific safeguards to prevent exploitation. This should form part of assessments of AI systems to understand their impact on individuals and society. Companies should develop clear guidelines for scoring systems to prevent the development of social scoring characteristics, and prioritise ethical design, fairness and non-discrimination. Privacy impact assessments should be carried out to ensure compliance with the various prohibitions. In particular, firms should be very careful when using any real-time identification systems. In all cases, companies should maintain comprehensive records of AI system design, training and deployment, and any critical decision made by an AI system should be overseen by a human.

4. Not clearly listed: categorisation

Unacceptable Risk AI systems are systems that are deemed inherently harmful and are considered a threat to human safety, livelihoods and rights. In contrast, high-risk AI systems are systems designed for specific use cases, such as hiring and recruitment, that may cause harm but are not inherently harmful. High-risk AI systems are legal, but subject to important requirements under the AI Act. It is therefore crucial to determine the difference between high-risk and Unacceptable Risk AI systems. In essence, any high-risk activity can escalate to Unacceptable Risk under the following circumstances:
- bias and discrimination: if the AI perpetuates bias or discriminates against protected groups;
- privacy violations: when AI systems compromise user privacy or misuse sensitive data; and
- psychological harm: if the AI manipulates individuals, causing psychological distress.

AI systems that can perform generally applicable functions and serve multiple intended and unintended purposes (being General Purpose AI models) are not inherently prohibited under the AI Act, but must be used with care, since in certain scenarios they lead to Unacceptable Risk activities. To assess whether a General Purpose AI model poses an Unacceptable Risk, it is necessary to consider the context in which the model operates. If it influences critical decisions (e.g. hiring, credit scoring), perpetuates bias or discriminates, or compromises user privacy (e.g. by collecting sensitive data without consent), the risk increases and the model may need to be adapted.

5. Best practice and compliance

While the AI Act provides examples of explicit prohibitions, it cannot cover all possible situations, as the technology is, through updated versions and by definition, constantly evolving. As a guide, legal and compliance teams should ask the following questions when considering high-risk AI systems:

Risk assessment:
- What is the evidence that the categorisation of the AI application is minimal, limited, high or Unacceptable Risk?
- Does the application in any circumstances use or act on sensitive data, or influence critical decisions?

Contextual analysis:
- Does the application operate in a sector that carries a presumption of increased risk, for example (a) financial services or (b) healthcare?
- In what ways does the deployment of the application impact (a) individuals and (b) society?

Specific criteria:
- Can any decisions of the application be considered to give rise to manipulation, exploitation, discriminatory scoring or biometric identification?
- Does the application operate on or have access to data that could give rise to the exploitation of subliminal techniques or of vulnerabilities related to protected characteristics, such as age or disability?

Transparency and documentation:
- In what ways is the AI system transparent about its inherent functioning and decision-making?
- In what ways does the user's documentation of the design, training and deployment of the application demonstrate compliance with the various rules?

6. Conclusion

Unacceptable Risk AI activities are those practices that pose inherent harm to people and are strictly forbidden under the AI Act. The potential for reputational damage and regulatory sanctions serves as a strong deterrent against breaching these provisions. It is essential for companies to take proactive measures to ensure compliance and prevent harm to individuals and society.
11/03/2024
Looking ahead to the EU AI Act
Introduction

The European Union is preparing for the imminent adoption of the world's most significant legislation on Artificial Intelligence, solidifying its position as a pioneer among global legislators. This initiative aims to establish and reinforce the EU's role as a premier hub for AI while ensuring that AI development remains focused on human-centred and trustworthy principles.

To expedite the achievement of these goals, on 8 December 2023, after three days of debate, the European Parliament and the Council of the European Union finally reached a provisional agreement on the "Proposal for a Regulation laying down harmonised rules on artificial intelligence" (the so-called AI Act), which aims to ensure that AI systems placed on the European market are safe and respect the fundamental rights and values of the EU. Following this provisional agreement, technical refinement of the AI Act continued in order to finalise the regulation's details and text. The final vote of the European Parliament on the AI Act will take place on 13 March 2024. Since the European Parliament's Committees on the Internal Market and Consumer Protection (IMCO) and on Civil Liberties, Justice and Home Affairs (LIBE) have overwhelmingly endorsed the proposed text, the approval of the European Parliament can be expected. After a long and complex journey that began in 2021 with the European Commission's proposal for a draft AI Act, the new regulation is expected to be passed into law in spring 2024, once it has been approved by the European Parliament and the Council of the European Union.

The AI Act aims to ensure that the marketing and use of AI systems and their outputs in the EU are consistent with fundamental rights under EU law, such as privacy, democracy, the rule of law and environmental sustainability. Adopting a dual approach, it outright prohibits AI systems deemed to pose unacceptable risks while imposing regulatory obligations on other AI systems and their outputs. The new regulation, which also aims to strike a fair balance between innovation and the protection of individuals, not only makes Europe a world leader in the regulation of this new technology, but also endeavours to create a legal framework that users of AI technologies will be able to comply with in order to make the most of this significant development opportunity.

In this article we provide a first overview of the key points contained in the text of the AI Act that companies should be aware of in order to prepare for the implementing regulation. (This article, including the relevant citations, is based on the latest draft available on the Council's website. The AI Act remains subject to possible further refinement, though not as regards content, and the text referred to here should be considered the closest to the one that will be voted on by the EU Parliament.)
11/03/2024
FAQs on the Flexible Kapitalgesellschaft (FlexKapG)
1. What is the Flexible Kapitalgesellschaft (FlexKapG), also known as FlexCo?
The Flexible Kapitalgesellschaft, also called FlexKapG or FlexCo, is a new legal form for Austrian companies that can be established from 1 January 2024. As it is a corporation, the shareholders are in principle liable without limitation for raising the share capital and for compliance with the creditor protection provisions. The legal form was designed specifically to meet the needs of start-ups and new founders, but also of established SMEs and large companies.

2. How does the share capital of a FlexKapG differ from that of a conventional GmbH?
The capital requirements of a FlexKapG and a GmbH are identical. The share capital must amount to at least EUR 10,000, of which at least EUR 5,000 must be paid in cash on formation, unless contributions in kind are made. The smallest permissible share in a FlexKapG is EUR 1, compared with EUR 70 in a GmbH.

3. What simplifications apply to written resolutions in the FlexKapG?
The option of passing written circular resolutions has been simplified considerably. The articles of association may now provide that votes can be cast in writing even without the consent of all shareholders, including written resolutions passed by e-mail.

4. What is special about shares in the FlexKapG compared with a conventional GmbH?
The statutory majority requirements for resolutions are the same for the GmbH and the FlexKapG. Virtual general meetings held as video conferences are permitted. Shareholders of a FlexKapG may split their votes when exercising their voting rights, which is particularly important for trustees.

5. What are Unternehmenswert-Anteile (enterprise value shares) and how do they differ from ordinary shares?
The FlexKapGG allows the issue of enterprise value shares of up to 24.999% of the share capital. These shares carry an entitlement to the balance-sheet profit and the liquidation proceeds, but no voting rights and no subscription rights in capital increases. In return, they carry a tag-along right on the same terms in an exit, i.e. when the founding shareholders sell the majority of their shares. Enterprise value shares are easily transferable and are particularly attractive for employee participation schemes and financial investors.

6. What advantages does the FlexKapGG offer when buying back shares?
The FlexKapGG permits the repurchase of enterprise value shares. These can be held, for a limited period, for future transfers. Contractual repurchase rights for enterprise value shares can also be agreed.

7. How does a capital increase in the FlexKapG differ from one in a conventional GmbH?
The FlexKapG is not limited to the "ordinary capital increase" available to the GmbH. In a FlexKapG, the articles of association and the general meeting may authorise the management to carry out a capital increase within five years ("authorised capital"). In addition, provision can be made for option and conversion rights to ordinary shares and enterprise value shares by way of a "conditional capital increase", and for share and conversion options the two forms can be combined ("authorised conditional capital"). All of this gives the management more room for manoeuvre and greater flexibility.

8. Which financing instruments can the FlexKapG issue?
The FlexKapGG allows the issue of financing instruments such as convertible and profit-participating bonds, bonds with warrants, convertible loans and participation rights, instruments that were already established in the GmbH.

9. What options for capital reductions does the FlexKapGG provide?
If the articles of association so provide, flexible companies may reduce their capital by cancelling shares, without the company first having to acquire them. This creates the legal basis for withdrawal and exclusion rights in the articles of association. However, this requires that the acquisition can be funded from unrestricted reserves; otherwise it is only possible by way of an ordinary capital reduction with a call on creditors.

10. Who benefits most from the FlexKapGG?
The FlexKapGG is aimed not only at new founders and start-ups, but also offers attractive advantages to existing companies, in particular when establishing subsidiaries or converting existing GmbHs into a Flexible Kapitalgesellschaft.
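Since the figures in questions 2 and 5 above are fixed statutory thresholds, a rough plausibility check of a proposed capital structure can be sketched in a few lines. The following Python fragment is purely illustrative: all names are hypothetical, it is not legal advice, and it omits the many statutory conditions a real assessment would require.

```python
# Illustrative sketch of the fixed FlexKapG thresholds mentioned above:
# EUR 10,000 minimum share capital, at least EUR 5,000 paid in cash
# (absent contributions in kind), EUR 1 minimum share, and enterprise
# value shares capped at 24.999% of the share capital.
# All function and variable names are hypothetical.

MIN_SHARE_CAPITAL_EUR = 10_000
MIN_CASH_CONTRIBUTION_EUR = 5_000
MIN_SHARE_EUR = 1
MAX_ENTERPRISE_VALUE_RATIO = 0.24999

def check_flexkapg_capital(share_capital: float,
                           cash_paid_in: float,
                           smallest_share: float,
                           enterprise_value_capital: float) -> list[str]:
    """Return a list of threshold violations (an empty list means
    the structure passes these four numerical checks)."""
    issues = []
    if share_capital < MIN_SHARE_CAPITAL_EUR:
        issues.append("share capital below EUR 10,000")
    if cash_paid_in < MIN_CASH_CONTRIBUTION_EUR:
        issues.append("less than EUR 5,000 paid in cash")
    if smallest_share < MIN_SHARE_EUR:
        issues.append("smallest share below EUR 1")
    if enterprise_value_capital > MAX_ENTERPRISE_VALUE_RATIO * share_capital:
        issues.append("enterprise value shares exceed 24.999% of capital")
    return issues

# Example: EUR 10,000 capital, EUR 5,000 in cash, EUR 1 shares and
# EUR 2,499 of enterprise value shares passes all four checks.
print(check_flexkapg_capital(10_000, 5_000, 1, 2_499))  # []
```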
11/03/2024
Flexible Kapitalgesellschafts-Gesetz - Unternehmenswert-Anteile (enterprise value shares)
Unternehmenswert-Anteile (enterprise value shares)
11/03/2024
Flexible Kapitalgesellschafts-Gesetz - Overview
Overview
11/03/2024
Flexible Kapitalgesellschaft | Publications by Johannes Reich-Rohrwig
Publications by Johannes Reich-Rohrwig
06/03/2024
Close the Gap – How the Pay Transparency Directive is changing HR work
CMS NewsMonitor Employment Law | Episode 31
05/03/2024
CMS Tax News | No stamp duty on hotel lease agreements
CMS Tax News | March 2024
05/03/2024
CMS Employment Snack | Close the Gap – how the Pay Transparency Directive...
Published on 05.03.2024

The gender pay gap is still closing only slowly. The Pay Transparency Directive is intended to speed things up, and it brings legally significant changes and pitfalls for day-to-day HR work. Our employment law and equal treatment experts Andrea Potz and Daniela Krömer, together with Christoph Wolf, discuss restricted rights to ask questions in recruitment processes, clear consequences of missing pay reports, and the legal consequences if a company's gender pay gap does not shrink.

Topics:
- Information rights and rights to ask questions in recruitment processes
- Information obligations during an ongoing employment relationship
- (Missing) pay reports and their legal consequences
- Pay assessment with the social partners
- Enforcement and penalties
21/02/2024
The works council as a controller within the meaning of the GDPR
CMS NewsMonitor Employment Law | Episode 30
06/02/2024
No limitation period for holiday entitlement accumulated over years?
CMS NewsMonitor Employment Law | Episode 29