The rapid adoption of Artificial Intelligence ("AI") tools, spotlighted by the success of innovative AI models such as OpenAI's ChatGPT, Google's Bard, and GitHub's Copilot, has drawn many companies into investing in AI to secure a competitive edge in a newly evolving market. One of the key themes in this regard is the critical role of due diligence in assessing the risks associated with AI investments. As competition for lucrative AI deals intensifies, the need for comprehensive due diligence assessments is rising, focusing on factors that could potentially impede AI, machine learning and data-centric investments.
Critical considerations in an AI investment due diligence include, for example, technology evaluation, intellectual property assessment, data quality analysis, team expertise evaluation, business model scrutiny, regulatory compliance, ethical considerations, performance validation, risk management and cybersecurity measures.
In the following, we highlight some of the most important issues to consider when embarking on an investment journey in the AI realm:
1. Technology and intellectual property
- Evaluate the technology behind the AI product or service. While the output/service side of an AI tool can look impressive from the outside, it is important to understand the basis on which the tool operates, e.g. which software code/algorithms it is built on and which training/input data it relies on.
- Seek to understand the uniqueness and competitive advantage of the AI algorithms, models, or software. How is the AI tool distinct from other tools, and how does it approach problem solving differently from the pre-existing tools known to you?
- Assess intellectual property protection (copyrights, patents, trade secrets) to ensure the robustness and defensibility of the respective AI tool. Be aware that copyrights are likely the most relevant intellectual property right in AI software tools (patent protection for software tools is not excluded, but is only granted under particular circumstances under Swiss law).
Proprietarily crafted software code is commonly considered protected under copyright law. Under Swiss doctrine, copyrights come into existence at the moment of creation and cannot be registered (unlike e.g. patents and trademarks). It is therefore pertinent to inquire into the development process of the underlying software and the chain of title (who developed the software and assigned the rights to the target company).

In this context, thought should also be given to Open Source Software ("OSS") compliance. Does the AI tool rely on elements of OSS? If yes, which OSS under which public license terms? Note that certain OSS licenses can considerably restrict the way an OSS-based product may be re-distributed/marketed and impose additional obligations, e.g. on public source code availability. Depending on which OSS elements are part of the AI tool, a legal risk assessment will lead to conclusions on the marketability of the AI tool (a minimal sketch of a first-pass license inventory follows below).

Finally, be aware that a great deal of the knowledge surrounding an AI tool will also stem from know-how. Try to find out whether such know-how is documented and whether it is adequately protected against unauthorized access and use. The better it is safeguarded, the easier it will be to enforce its confidentiality against infringers in the future.
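By way of illustration, the following Python sketch lists the packages installed in a Python environment together with their self-declared licenses and flags common copyleft families for legal review. It is a minimal first-pass sketch only, assuming the AI tool is Python-based; the marker list is illustrative, package metadata can be wrong or missing, and a real audit requires dedicated scanning tools and legal review of each license text.

```python
# Minimal sketch of a first-pass OSS license inventory for a Python-based
# AI tool. It only inspects packages installed in the current environment
# and their self-declared metadata; it is no substitute for a proper audit.
from importlib.metadata import distributions

# Illustrative license families that often restrict redistribution.
COPYLEFT_MARKERS = ("GPL", "AGPL", "LGPL", "MPL", "EUPL")

def license_inventory():
    findings = []
    for dist in distributions():
        name = dist.metadata.get("Name", "unknown")
        declared = dist.metadata.get("License") or ""
        classifiers = dist.metadata.get_all("Classifier") or []
        hints = [c for c in classifiers if c.startswith("License ::")]
        declared = declared or "; ".join(hints) or "undeclared"
        flagged = any(m in declared.upper() for m in COPYLEFT_MARKERS)
        findings.append((name, declared, flagged))
    return findings

if __name__ == "__main__":
    for name, declared, flagged in sorted(license_inventory()):
        prefix = "REVIEW" if flagged else "ok    "
        print(f"{prefix}  {name}: {declared}")
```

A "REVIEW" flag in such an inventory does not mean the component is unusable; it merely marks licenses whose redistribution terms warrant closer legal analysis.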
2. Data quality and data privacy
- Assess the quality, quantity, and diversity of the input data used to train the AI system. Note that the success of most AI tools largely depends on the input data used to build and train the system. If the input data is poor or only weakly validated, the output will likely look the same.
- Understand the data sources the AI tool draws upon (is it proprietary data, licensed data from an official platform, or publicly available data?).
- Evaluate the data collection and processing process. This is where data privacy compliance comes into play. Needless to say, AI tools can produce remarkable results based on huge quantities of data, but is the tool owner entitled to hold and use this data at all? Who owns this data, and on what basis is the tool owner legally entitled to use it? If the tool owner is entitled to use it and such data is personally identifiable data, is it transparent how this data is used and processed (e.g. is there a purpose limitation on processing based on data privacy laws)? Can the tool owner explain the data processing purpose so that individuals are aware of it? Are there safeguards in place to prevent the processing from producing biased, wrong or incomplete output? Depending on the amount of personal data at stake, reconciling AI projects with data privacy laws can amount to a wider examination process (further information will be provided in a separate AI series on data privacy compliance and AI). A minimal sketch of a preliminary data screening follows after this list.
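As a starting point for the data quality and privacy questions above, the following Python sketch computes basic quality indicators (missing values, duplicates) for a tabular dataset and naively scans text columns for e-mail and phone number patterns. The file name, sample size and patterns are illustrative assumptions; this is a rough screening sketch, not a substitute for a proper data privacy review.

```python
# Minimal sketch of a preliminary screening of a tabular training dataset,
# assuming it is available as a CSV file. Patterns and file name are
# illustrative only.
import re
import pandas as pd

EMAIL = re.compile(r"[^@\s]+@[^@\s]+\.[a-zA-Z]{2,}")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def screen(df: pd.DataFrame) -> None:
    print(f"rows: {len(df)}, columns: {len(df.columns)}")
    print(f"duplicate rows: {df.duplicated().sum()}")
    print("share of missing values per column:")
    print(df.isna().mean().round(3))
    # Naive PII scan: sample string columns for e-mail / phone patterns.
    for col in df.select_dtypes(include="object").columns:
        sample = df[col].dropna().astype(str).head(1000)
        if sample.str.contains(EMAIL).any() or sample.str.contains(PHONE).any():
            print(f"possible PII in column '{col}' - review privacy basis")

if __name__ == "__main__":
    screen(pd.read_csv("training_data.csv"))  # hypothetical file name
```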
3. Team and expertise
- Evaluate the team behind the AI company (experience, qualifications, expertise in AI and data science). Should you invest in an AI company, you will have to make sure that key staff and their know-how remain on board at least for a transitional period, to ensure a seamless transfer of know-how and to enable you to continue the business in its current mode of operation.
- Look for a track record of successful AI projects or industry experience. The existing customer base, the breadth of sectors represented within it, the turnover generated with customers and the absence of disputes will give you some indication of the usability and commercialization potential of the AI tool.
- Assess the team's technical, educational, research and business background. Where do the people come from and what is their educational background? Are they familiar with the industry they serve, or do they come from entirely different fields?
4. Business model
- Assess the business model, revenue streams and relevant market of the AI tool offering. While some tools are deployed on premise (often for security reasons), most are offered platform-based. There are different commercial subscription models and terms and conditions to review in this regard. Consider also how, and to what extent, the AI offeror is entitled to re-use its customers' data in order to continue feeding/training the AI tool with new data. This option should be regarded as a separate revenue stream, albeit one paid not in monetary currency but in data currency, which provides further options to expand the potential of the AI tool.
- Understand the target market, competition, and barriers to entry. AI tools are currently proliferating, but consider whether the AI offeror has, or has access to, the critical data required for its long-term success, and how sophisticated its tool is compared to the rest of the market. AI tools live and thrive on data, and only with access to the necessary data resources will it be possible to produce results that are of interest to users and ahead of the competitors' curve. Consider that not only open generative AI offerings (such as ChatGPT) have potential, but also closed systems based on confidential customer-specific data, enabling customers to gain more insights than they had before.
- Evaluate scalability, risks, and challenges in commercializing the AI technology. In this context, contracts are a pertinent risk management tool in the AI field. Review the contract offerings between the AI tool offeror and its customers. Does the agreement set customers' expectations correctly? Does it sufficiently carve out and limit warranties and liabilities with regard to pre-existing data inputs and future data outputs? Does it give the AI offeror sufficient liberty to use, process and re-process customers' data for its own purposes in the long run?
5. Regulatory framework
- Understand the regulatory landscape and legal implications related to AI offerings. As already mentioned, AI offerings (i.e. their basis of collecting and processing data) can sometimes collide with statutory requirements of data privacy laws. More importantly, Artificial Intelligence as such is facing increasing regulatory scrutiny under the European Artificial Intelligence Act (see our AI Series 1 for more detailed information on the requirements of this law: https://cms.law/en/che/publication/ai-series-1-the-eu-artificial-intelligence-act-is-almost-ready-to-go) and further existing and upcoming European laws, such as (i) the proposed EU AI Liability Directive, (ii) the European Digital Services Act, (iii) the European Digital Markets Act and (iv) the European Data Act. Be aware that even a non-EU-based company quickly falls under the ambit of these European statutes if it offers AI tools (as defined in these laws) to customers in the EU or to EU businesses who re-use them within their own products and re-sell them to customers. These laws impose a great deal of complexity, as an AI tool will have to meet the regulatory requirements of all these statutes together. In in-house counsel lingo, the multitude of these European laws is often referred to as the "stack": a stack of statutes through which every AI tool needs to pass the entry test, so to speak. Without going into details, the stack will require an AI offeror to analyze and categorize the risk potential of its AI tool, document such assessment and implement a thorough risk management system encompassing risk mitigation and cybersecurity measures. The Q&A process should list starting-point questions in this regard and make sure to evaluate the company's approach to ethical considerations like bias mitigation, transparency, fairness, and accountability.
6. Performance
- Seek evidence of the AI system's performance, accuracy, and reliability. An AI offeror should be in a position to present customer testimonials; request access to such information, as well as independent third-party evaluations or case studies demonstrating the effectiveness of the AI solution. Also, consider pilot projects to assess the tool yourself without overcommitting (a minimal evaluation sketch follows below).
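The following Python sketch illustrates how such a pilot evaluation could be structured: run the vendor's tool on a held-out sample with known correct answers and measure agreement. `query_vendor_tool` is a hypothetical stand-in for the vendor's actual API; the metric, sample size and labels must be adapted to the concrete use case.

```python
# Minimal sketch of a pilot evaluation: compare tool output against
# reference answers on a held-out sample. All names below are illustrative.
from typing import Callable, Sequence

def pilot_accuracy(
    tool: Callable[[str], str],
    inputs: Sequence[str],
    expected: Sequence[str],
) -> float:
    assert len(inputs) == len(expected), "each input needs a reference answer"
    hits = sum(1 for x, y in zip(inputs, expected) if tool(x) == y)
    return hits / len(inputs)

def query_vendor_tool(text: str) -> str:  # hypothetical stand-in API
    return "positive" if "good" in text else "negative"

if __name__ == "__main__":
    samples = ["good product", "bad service", "good support"]
    labels = ["positive", "negative", "positive"]
    print(f"pilot accuracy: {pilot_accuracy(query_vendor_tool, samples, labels):.0%}")
```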
7. Financials
- Analyze the financial health of the AI company (financial statements, revenue projections, funding history).
- Assess potential need for additional funding.
- Understand ownership structure and existing investors.
8. Partnerships
- Assess existing partnerships or collaborations with other AI suppliers or IT providers, and the integration with other technologies; these may open access to synergies and broader market commercialization than if the AI tool is offered through an isolated channel. Make sure to understand the risks and benefits of existing or future partnerships for AI investments. In particular, a partner may provide access to a considerable amount of the critical data necessary to reach the AI's full potential.
- Review the relevant "partner agreements" from a legal standpoint and assess the risks involved (in particular commission payments and their compatibility with conflict-of-interest principles).
9. Risk management
- Evaluate the AI company's approach to risk management and cybersecurity. Be aware that European laws on Artificial Intelligence (see Section 5 above) impose burdensome requirements on risk analysis, risk categorization and risk mitigation measures (risk management) to demonstrate compliance with these laws (see further information e.g. at https://cms.law/en/che/publication/ai-series-1-the-eu-artificial-intelligence-act-is-almost-ready-to-go).
- Understand the measures in place to protect sensitive data and ensure system integrity (in particular, to prevent unauthorized access). Review whether the AI offeror has obtained a formal IT security certification (e.g. ISO/IEC 27001). Although this is not a formal legal requirement, it provides some indication of the level of its IT security measures, and such standards are often considered persuasive by regulators and courts.
- Assess emergency plans for potential risks like system failures and security breaches, i.e. the capability to ensure "business continuity". Be aware that this requirement is of high relevance for regulated customers of the AI tool, in particular banks and insurance companies in Switzerland. Such customers are also required to demonstrate mitigation measures to the Swiss regulator if data breaches occur, and the more emergency plans/remedial measures are already in place, the better for the tool's commercialization in the market.
10. Exit plan
- Consider potential exit options for the investment, e.g. growth, acquisition, IPO or selling your stake to a new partner or investor. Note that if you did your homework on the due diligence for your initial investment, the documentation will be ready for the next exit steps envisaged.
Conclusion
As one can see, investing in Artificial Intelligence bears many similarities to commonly known investments in the technology sector. Nevertheless, Artificial Intelligence as a product has many particularities, in particular its strong dependency on vast amounts of data and the detailed risk analysis and risk management requirements stipulated under European AI laws. The latter have a bearing on Swiss companies if their AI offerings are offered to customers in the EU or if their offerings are integrated into products used or offered in the EU. If and to the extent that an AI tool depends on the use of personally identifiable data, a considerable part of the due diligence effort should also be devoted to assessing the data privacy compliance of the AI tool's entire input and output process and its ethical safeguarding mechanisms (e.g. to avoid compromise of data and/or replication of bias in the data output). The checklist in this AI series provides a rough guideline on how to approach such investments. Ideally, the due diligence process should be accompanied by legal experts in the fields of information technology (IT) and intellectual property (IP).