
Artificial Intelligence (AI)

Navigating AI opportunities, associated risks and the legal landscape in the Netherlands with your trusted experts


The Netherlands is recognised for its progressive approach to technology and innovation, and this extends to the field of Artificial Intelligence (AI). The Dutch government, along with the private sector and academic institutions, is actively involved in the development and regulation of AI technologies. The country's strategic location in Europe, combined with its strong digital infrastructure and a highly educated workforce, makes it an attractive hub for AI research and development.

Legal Framework and Regulation

The legal landscape for AI in the Netherlands is currently shaped by both national and European Union (EU) regulations. The EU has recently introduced the Artificial Intelligence Act (AI Act), which applies directly in the Netherlands. The AI Act aims to ensure that AI systems are safe, respect European values and laws, and uphold fundamental rights. Its impact is expected to be significant across various domains, including data protection, employment, intellectual property, mergers and acquisitions, and technology.

In addition to the newly introduced AI Act, the Netherlands continues to adhere to EU-wide initiatives such as the European strategy on AI, which aims to boost the EU's technological and industrial capacity and AI uptake across the economy, all while ensuring respect for fundamental rights.

Interdisciplinary AI team

Global and local legal experts within our interdisciplinary TMC AI team have in-depth knowledge of both the technology and the associated legal issues. They look forward to supporting you in designing, implementing and running your business in a legally compliant and ethically responsible manner.

AI Strategy and Initiatives

The Dutch AI strategy is encapsulated in the Strategic Action Plan for Artificial Intelligence, which outlines the country's ambitions to harness the potential of AI while mitigating the risks. The plan focuses on:

  • Investing in AI research and development
  • Encouraging the use of AI across various sectors
  • Ensuring an adequate supply of AI talent
  • Addressing ethical and societal questions related to AI

The Netherlands AI Coalition (NLAIC), wherein CMS plays a key advisory role, is a public-private partnership that aims to accelerate AI developments in the country by fostering collaboration among businesses, government, educational and research institutions, and civil society organisations.

Explore more

  • AI and Data Protection: navigating the AI Act and GDPR to ensure data protection compliance in AI systems
  • AI and M&A: expert legal guidance on the impact of the AI Act on AI-driven transactions
  • AI and Employment: revolutionising the workplace through the legal and ethical integration of AI
  • AI and Intellectual Property: balancing innovation and copyright protection
  • AI and Technology: navigating the legal landscape

Our team

Katja van Kranenburg-Hanspians, Amsterdam
Edmon Oude Elferink, Amsterdam
Rogier de Vrey, Amsterdam
Tom Jozak, Amsterdam
Simon Sanders, Amsterdam
Elmer Veenman, Amsterdam

Feed

19/06/2024
AI Regulations in the EU, UK, and Asia: Essential Insights for HR and Employers
Join the members of the International CMS Employment and Pensions team for a webinar designed specifically for HR professionals, corporate leaders and legal experts who need to stay ahead of developments in AI regulation across jurisdictions.

Why attend? The EU AI Act is poised to reshape the framework within which businesses operate. Understanding its implications is crucial for managing compliance and leveraging AI technology effectively within your HR practices. This webinar, led by employment law experts from CMS, will cover:

  • an overview of the most important regulations in the EU AI Act for employers;
  • the specific impacts of the EU AI Act on roles, AI skills, AI policies and the relationship with the works council; and
  • insights on AI regulation in the UK and Asia.

The session will be conducted in English and is free of charge. We look forward to seeing you.
13/06/2024
The impact of the AI Act on Tech M&A due diligence
In mergers and acquisitions (M&A), due diligence is the cornerstone of a careful analysis of the commercial, legal, tax and financial aspects of a company. For this purpose, acquirers and their advisors take a deep dive into a company's key documents, contracts, books and records. In the world of Tech M&A, the rise of artificial intelligence (AI) raises the question of what is important in a due diligence investigation of AI companies. In our recent exploration of the AI Act we introduced the framework and its implications. The AI Act requires companies engaged in AI-related activities to undertake a comprehensive compliance assessment. This has implications for the focus of due diligence on such companies. The AI Act was approved by the European Parliament on 13 March 2024 and given the final green light by the European Council on 21 May 2024. The application of the AI Act will be staged over two years. Now that unacceptable risk AI systems will already be banned as of December 2024, it is time to zoom in on the various players distinguished in the AI Act from an M&A due diligence perspective.

AI Act players & due diligence

Any company that uses AI may be subject to the AI Act. Each provider, importer, distributor and deployer of AI is required to demonstrate its compliance with the AI Act in a manner unique to its own operations. This highlights the need for a tailored due diligence approach. Particular attention should be paid by companies which envisage acquiring:

  • providers placing on the market, or putting into service, AI systems, or placing on the market general-purpose AI models, in the EU, irrespective of whether those providers are based within the EU or in a third country;
  • deployers of AI systems that are based within the EU; or
  • providers and deployers of AI systems that run their businesses from a country outside the EU but where the output produced by the AI system is used in the EU.
This also means that M&A due diligence must cover not only the AI systems developed or provided by the target company, but also those imported, distributed or deployed by it or by its customers or partners in the EU. As the AI Act adopts a risk-based approach, the first step of any due diligence should be to determine or verify the appropriate classification of the AI systems: unacceptable risk, high risk, limited risk or minimal risk. AI systems classified as unacceptable risk are prohibited. AI systems classified as minimal risk are unregulated, and limited risk AI systems are subject only to light transparency obligations. From the perspective of the AI Act, the focus of due diligence should therefore predominantly be on high risk AI systems. Depending on the risk appetite of the acquiring company as well as the potential impact on the reputation and competitiveness of the target company, voluntary conformity assessment is possible for AI systems that are not classified as high risk but which may nevertheless affect the rights or interests of individuals.

The AI Act lists prohibited AI practices in Chapter 2 and sets out requirements for providers, deployers and other parties in respect of high risk AI systems in Chapter 3. We consider the appropriate approach for each of the parties concerned below.

Providers: companies that develop AI systems under their own name and trademark are considered providers. Their obligations are the most extensive of all parties concerned and include the establishment of quality management systems, preparation of technical documentation, CE conformity marking and maintenance of logs. Any distributor, importer, deployer or other third party who makes a substantial modification to a high risk AI system, or changes the intended purpose of an AI system and markets it under its own name or trademark, will be considered a provider under the AI Act.
Furthermore, all documentation of providers must be kept for a period of 10 years.

Importers: importers placing AI systems on the market under the name or trademark of companies outside the EU must ensure that the required assessments, documentation and CE conformity marking are in place. This should be verified as part of the due diligence. If these assessments and documentation are not in place, the importer is not allowed to place the product on the EU market and risks significant fines.

Distributors: a distributor is any company in the supply chain that makes an AI system available within the EU without affecting its characteristics and that is not a provider or importer. Distributors are responsible for ensuring that the AI system is accompanied by the appropriate technical documents and instructions, including the CE conformity marking, when providers or importers deliver these products to them. In addition, distributors should verify that the provider and the importer have fulfilled their obligations under the AI Act. If not, corrective actions may be imposed, such as bringing the system into conformity or withdrawing the product.

Deployers: companies that only use AI systems in their business must take appropriate technical and organisational measures to comply with the instructions given by the provider. These compliance obligations are lighter than those of the entities responsible for providing the instructions and related documentation. However, deployers should not only follow the instructions, but are also obliged to consider potential risks and to notify the provider or distributor and the relevant market surveillance authority thereof. When making such a notification, deployers should also suspend the use of the product. Deployers should also keep the logs (automatic recordings of events) available, which should be reviewed as part of the due diligence.
Parties using AI for personal, non-professional activities do not have to comply with these rules. Due diligence requires a thorough investigation to assess the compliance of all of these parties with their obligations under the AI Act. Moreover, the AI Act introduces additional obligations and restrictions on providers and deployers of high risk AI systems that process personal data or that are used for purposes such as identification, detection, risk assessment or prediction of certain behaviour.

Conclusion

The introduction of the AI Act will reshape due diligence in AI-related transactions. Compliance with the AI Act is essential for all players. Step 1 of any due diligence should be to determine or verify the appropriate classification of the AI systems: unacceptable risk, high risk, limited risk or minimal risk. As part of this first step, acquiring companies should also develop their 'AI literacy': the skills and knowledge to better understand how providers and deployers take into account their rights and obligations under the AI Act, and to become aware of the opportunities and risks of the relevant AI systems. For high risk AI systems, step 2 of the due diligence requires a tailored and thorough investigation to assess the target's compliance with its obligations under the AI Act.
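The two-step, risk-based approach described above can be sketched as a simple triage helper. The four risk tiers mirror the AI Act's classification as summarised in this article; the function name and the focus descriptions are purely illustrative, not part of the Act.

```python
# Illustrative sketch of the two-step due diligence triage described above.
# The tier names follow the AI Act's risk-based classification; the focus
# strings and function names are hypothetical shorthand, not legal advice.

RISK_TIERS = {
    "unacceptable": "prohibited: the system may not be placed on the EU market",
    "high": "step 2: full review of technical documentation, QMS, CE marking and logs",
    "limited": "verify that the light transparency obligations are met",
    "minimal": "unregulated under the AI Act; no AI Act-specific review needed",
}

def due_diligence_focus(risk_tier: str) -> str:
    """Step 1: classify the AI system; return the step-2 focus for that tier."""
    if risk_tier not in RISK_TIERS:
        raise ValueError(f"Unknown risk tier: {risk_tier!r}")
    return RISK_TIERS[risk_tier]

print(due_diligence_focus("high"))
```

In practice the classification itself (step 1) is the hard legal question; a mapping like this only records the consequence of that classification for the scope of step 2.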
05/06/2024
EU adopts AI Act – key components and next steps for organisations
The EU has adopted a comprehensive regulation on artificial intelligence (AI) that aims to foster innovation, ensure trustworthiness, and protect fundamental rights. The regulation sets out harmonised...
10/11/2023
Navigating the AI Act in Tech M&A
The use of artificial intelligence (AI) in various sectors is transforming the landscape of mergers and acquisitions (M&A), requiring companies and their M&A advisors to keep up with the rapidly changing technological and regulatory environment. On 21 April 2021, the European Commission proposed the 'Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence' (AI Act), which aims to establish a regulatory framework, inter alia, for the providers (including product manufacturers), users, distributors and importers of AI systems. The AI Act is expected to be formally adopted in the first half of 2024. In our last contribution, we discussed the concept of 'Tech M&A'. In this article, we take a look at how the AI Act will affect Tech M&A.

AI Act

The AI Act introduces a new legal framework that classifies AI systems according to the level of risk they pose to the rights and freedoms of individuals. Four levels of risk are distinguished:

  • Unacceptable risk: AI systems in this first category are prohibited by the AI Act.
  • High risk: with respect to high-risk AI systems, the AI Act imposes requirements and obligations regarding, inter alia, (i) technical documentation; (ii) risk management systems; (iii) conformity assessment procedures; (iv) log keeping; and (v) quality management systems.
  • Limited risk: Title IV of the AI Act concerns certain AI systems that pose specific risks of manipulation. The transparency obligations set out therein apply to systems that (i) interact with humans, (ii) are used to detect emotions or determine association with (social) categories based on biometric data, or (iii) generate or manipulate content ('deep fakes').
  • Low or minimal risk: lastly, the AI Act creates a framework for the creation of codes of conduct, which aim to encourage providers of non-high-risk AI systems to voluntarily apply and implement the mandatory requirements for high-risk AI systems.

The task of monitoring and enforcing the provisions of the AI Act is assigned to a national supervisory authority in each Member State. In the Netherlands, the competent authority is the Dutch Data Protection Authority (Autoriteit Persoonsgegevens). Failure to comply with the obligations and requirements laid down in the AI Act may result in a penalty. Further rules on penalties shall be determined by each EU Member State individually, taking into account the maximum penalties provided for specific infringements of the AI Act. For example, infringements of Article 5 (regarding prohibited AI practices) are subject to administrative fines of up to EUR 30,000,000 or up to 6% of the company's total worldwide annual turnover for the preceding financial year.

Due diligence

As the AI Act will come into force in the near future, assessing the risk level(s) of the relevant AI system(s) and their compliance with the AI Act is already – and will increasingly become – an important part of the due diligence in Tech M&A transactions. In addition, depending on the role of the target company (e.g. as a provider or user of AI), it will be crucial in such due diligence investigations to assess information on the ownership of AI-generated intellectual property rights, compliance with data protection regulations and liability for AI decision-making.

Transaction documentation

The AI-related risks identified in the due diligence phase should be addressed in the share purchase agreement through appropriate warranties and indemnities and signing or closing conditions. These due diligence findings may also affect the valuation, negotiation and structuring of the M&A transaction.
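To make the penalty ceiling quoted above concrete: the figures below use the EUR 30,000,000 / 6% of worldwide annual turnover caps for Article 5 infringements cited in this article, and assume, as in comparable EU fine regimes, that the higher of the two amounts applies to a company. That "whichever is higher" reading is our assumption for illustration.

```python
# Sketch of the maximum administrative fine for an Article 5 infringement,
# using the ceilings quoted in the article (EUR 30m or 6% of total
# worldwide annual turnover). Assumes the higher of the two applies to
# a company, as in comparable EU fine regimes (illustration only).

FIXED_CAP_EUR = 30_000_000
TURNOVER_RATE = 0.06  # 6% of total worldwide annual turnover

def max_article5_fine(worldwide_annual_turnover_eur: float) -> float:
    return max(FIXED_CAP_EUR, TURNOVER_RATE * worldwide_annual_turnover_eur)

# For a company with EUR 1bn turnover, 6% (EUR 60m) exceeds the fixed cap.
print(max_article5_fine(1_000_000_000))  # 60000000.0
```

For smaller companies the fixed EUR 30m ceiling dominates: 6% only exceeds it once worldwide turnover passes EUR 500m.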
Representations and warranties: the seller should provide specific and comprehensive representations and warranties regarding compliance with the AI Act.

Signing or closing conditions: the parties may need to include more tailored conditions relating to the target company's AI systems, such as obtaining or maintaining any necessary authorisations, registrations, certifications or notifications under the AI Act in order to comply with any ongoing or reporting requirements.

Furthermore, in the event of W&I-insured transactions, the parties and their insurers should adapt the scope of their due diligence, disclosure, negotiations and underwriting processes to account for AI risks and to ensure that the W&I insurance provides adequate coverage for these risks. The introduction of the AI Act and the expected development of associated national legislation may result in uncertainty regarding the (legal) risks involved. We therefore expect W&I insurance to be in demand among parties in Tech M&A transactions.

Conclusion

In the dynamic landscape of Tech M&A, the development and application of AI and the introduction of the proposed AI Act are transforming M&A processes. AI-related transactions require a tailored approach at each stage of the transaction, focusing on identifying specific AI risks and incorporating such risks into the transaction documentation. The AI Act, once enacted, will impose various requirements, obligations and other considerations on all stakeholders in an M&A process. Time will tell how the Tech M&A market will respond to the development of further legislation to regulate AI systems. We consider the adoption of the AI Act a confirmation of the significant potential of AI companies and foresee a bright future for Tech M&A.
06/09/2023
Decoded: traditional M&A vs. Tech M&A
Mergers and acquisitions (M&A) are crucial to the growth and evolution of businesses across industries. M&A in its most traditional form involves companies in various sectors coming together to consolidate resources, expand market reach, and enhance overall competitiveness. However, with the rapid advancement of technology, a new paradigm has emerged: Tech M&A. Tech M&A focuses on new kinds of technology that can create great value for businesses, but it also creates new challenges. Our specialised Tech M&A team will explore the key challenges of Tech M&A in a series of articles. In this first article, we explore the key differences between traditional M&A and Tech M&A.

Traditional M&A

Traditional M&A refers to the consolidation of companies in various industries, such as manufacturing and retail. In traditional M&A, the primary objectives often include achieving economies of scale, synergising operations, gaining access to new markets, and diversifying products or services. The legal and other processes involved in these transactions are well established and generally of a static nature, with extensive precedents and routines governing the entire process. Companies meticulously assess each other's financials, assets, liabilities and risks. Tangible assets, such as real estate, machinery and inventory, are an important element of such an assessment.

Tech M&A

Tech M&A, on the other hand, refers to acquisitions involving technology-driven companies. Unlike traditional M&A, where tangible assets are at the forefront, Tech M&A centres on intangible assets like intellectual property, data, software, and innovative technology such as artificial intelligence. The legal and other processes for Tech M&A transactions are therefore tailored to focus on such assets during every stage of the transaction. Strategic value is placed on a company's proprietary technology, talent, and potential for future growth.
Consequently, Tech M&A transactions are dynamic and complex, and time is of the essence.

Key differences and challenges

Need for alternative valuation methods: traditional M&A primarily relies on financial metrics and historical performance to determine valuation. In contrast, Tech M&A requires a keen assessment of a company's intellectual property, technology, market potential, and future growth prospects. Valuation methods like discounted cash flow and comparable analysis are still relevant, but new metrics and tools are required to evaluate the potential of a prospect accurately.

Continuous technological advancements: the fast-paced nature of technology requires the acquiring party to carefully consider the sustainability and scalability of the technology it is acquiring. This is far from a static assessment and may even change during the course of a transaction. When designing the process for a Tech M&A transaction, parties should consider which developments are especially relevant for the target. Furthermore, the process should provide sufficient flexibility to address any such developments without jeopardising the overall timeline.

ESG concerns around the use of AI and data privacy: ESG considerations have unmistakably become an important topic in any M&A transaction. Technology-driven companies, often operating in dynamic and innovative environments, tend to be more resource-efficient and carbon-light than traditional companies, aligning well with ESG goals. However, the technological advancements on which such companies are built may raise concerns about their impact on society, including in respect of artificial intelligence, automation and data privacy.

Rapidly changing regulatory landscape: regulatory requirements and developments are key to both traditional M&A transactions and Tech M&A transactions. Technology-driven companies have historically operated in a less regulated legal landscape than traditional companies.
This landscape is changing as a result of European legislation focused on shaping Europe's digital future, including the Digital Services Act, the Digital Markets Act and the EU AI Act, the world's first comprehensive AI law. The impact of such legislation on tech companies will soon become a decisive factor in Tech M&A transactions and will force these companies to bring their operations into compliance.

Conclusion

As technology continues to shape the business landscape, the differences between traditional M&A and Tech M&A become more pronounced. Actors in the Tech M&A space must adapt to the unique challenges posed by Tech M&A, including complex valuations, regulatory matters, and the fast-paced nature of the industry. Staying informed on the latest trends will be crucial to navigating the Tech M&A landscape.

Stay up-to-date by subscribing to our Corporate M&A and TMC newsletters to receive future articles and event invitations about Tech M&A directly in your mailbox.