
AI Series 4: Artificial Intelligence in the field of litigation & arbitration: What impact will it have?

Artificial Intelligence (AI) has not only paved the way for IT-savvy fields of practice. It is also likely to revolutionize aspects of litigation, in particular the way litigation procedures are conducted and the variety of disputes that will emerge from the AI realm. In the following, we highlight some of the most noteworthy trends and developments. On the one hand, AI-enhanced litigation methods are visible on the near horizon (even if their accuracy has not yet been fully confirmed); on the other hand, a growing risk of AI-related litigation is becoming apparent, which requires early reflection and a response from businesses.

1.      Artificial Intelligence-enhanced litigation

 

  • Virtual legal assistance within the judiciary system: Virtual legal assistants already play a relevant role in legal operations. AI-powered virtual assistants could improve over the next few years, and many administrative tasks such as scheduling, document assembly and filing could increasingly be handled by machines. Consequently, human attorneys or judges might interact with such assistants much as they interact with human assistants today;
  • Judicial customization of work procedures: Today's judges and attorneys must read lengthy pleadings with highly variable content. It is conceivable that AI tools can or will be used to accelerate or at least improve the reading and processing experience. Moreover, judges might have preferences on how court documents should be structured and could communicate those preferences through an AI-based model, e.g. templates or suggestions of what their "wish-pleadings" should look like (e.g. the Swiss employment courts provide comprehensive pleading templates for employees wishing to file their claims). Whether and to what extent this will occur is difficult to predict. Judges might not always be willing to disclose their preferences for pleadings in advance or even open the door to potential profiling. Also, judges have little reward for processing more cases. Thus, it might even be more beneficial for a judge to receive a poorly drafted pleading that he can reject and thereby close the case (because it does not meet the legal requirements);
  • AI-enhanced judicial logic: Depending on the complexity of a case, it would likely be unethical for judges, and in conflict with statutory and constitutional principles, to incorporate AI analytics into the adjudication process. Algorithms drawing upon billions of data points could assist with fact analysis or help craft a set of compelling legal arguments which a judge would then review and refine. Current procedural laws in Switzerland enshrine the principle that a judge must, in general, reason, craft and deliver his decisions himself (the unlawful delegation of this personal task to third parties is often termed the so-called "démission du juge", a type of behavior that is not permitted and can be legally challenged). Nonetheless, it is a salient reality that non-complex, mass-enforcement adjudication procedures (such as enforcing parking fines) are already conducted in an automated manner (one receives an e-mail/SMS from a police authority explaining the cause of action/parking violation, with the option to click "I accept", the automatic delivery of a fine and a payment option via TWINT to pay the fine and close the case), and it is likely that AI-enhanced judicial decisions will increase in low-complexity, mass-enforcement areas of the law.
  • Emotion and sentiment analysis: This sounds creepy and far-fetched. But could an AI-based tool detect whether the judge sighs or whether another member of the court is frowning? Could we analyze a witness' eyes and infer whether he is nervous? What if an AI tool could analyze such sentiments and moods? Would this empower lawyers to adapt their tactics during a trial? While resistance to the introduction of such (surveillance-like, lie-detector-like) features is to be expected, it is fair to say that human participants in court proceedings have always observed moods and human behavior (especially in settlement talks) and shaped their tactics accordingly to some degree. The doubtful element of emotion and sentiment analysis probably lies in the accuracy of sentiments inferred solely from the external (merely superficial) appearance of human participants over a very short period of time. It may well be that, for the moment, human participants will continue to trust their own instincts more than an AI tool. However, such an AI tool could still provide hints to a participant, e.g. a notice to an attorney: "the judge is yawning for the 10th time; he seems uncomfortable and appears not to know your written brief well enough". An attorney could then draw his own conclusions, for instance re-pleading more than planned in order to make a more compelling argument to the judge. Or an AI tool could suggest that a judge's one-sided remarks reveal a bias towards one witness or a particular line of reasoning. An attorney provided with real-time feedback from an AI tool could then shape his strategy of arguing in court, always provided that such tools are reliable, accurate and sufficiently fast (a simplified sketch of such a hint mechanism follows below).
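
To make this more tangible, below is a minimal, purely illustrative sketch (in Python) of how coarse courtroom observations could be turned into real-time hints for counsel. All names used here (CourtroomSignal, generate_hint, the thresholds) are hypothetical assumptions for illustration only; the sketch presumes that a separate, reliable video-analysis component already delivers the underlying observations, which is precisely the accuracy question raised above.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CourtroomSignal:
    """One coarse observation delivered by a (hypothetical) video-analysis component."""
    person: str              # e.g. "judge", "witness A"
    behavior: str            # e.g. "yawn", "frown", "sigh"
    count: int               # how often the behavior has been observed so far
    minutes_elapsed: float   # time since the hearing started

def generate_hint(signal: CourtroomSignal) -> Optional[str]:
    """Translate an observation into a cautious, human-readable hint for counsel.

    The thresholds are arbitrary placeholders; a real tool would have to be
    validated for accuracy before any tactical reliance is placed on it.
    """
    if signal.person == "judge" and signal.behavior == "yawn" and signal.count >= 10:
        return ("The judge has yawned for the 10th time; he may be uncomfortable "
                "or insufficiently familiar with your written brief.")
    if signal.person.startswith("witness") and signal.behavior == "frown" and signal.count >= 5:
        return f"{signal.person} is repeatedly frowning; consider slowing down your questioning."
    return None  # a single observation alone is not meaningful enough

# Example: an attorney's dashboard polls the analysis component and displays a hint.
if __name__ == "__main__":
    observation = CourtroomSignal(person="judge", behavior="yawn", count=10, minutes_elapsed=42.5)
    hint = generate_hint(observation)
    if hint:
        print(hint)
```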

While all of the above is still a vision of the future and not all of these tools will be relevant in the near term, we believe that some of these features will gradually enter the judicial system and could raise new questions about the risks of AI deployment and its compatibility with the statutory and constitutional requirements of the judicial system.



2.      Growing Artificial Intelligence litigation risks

The litigation risk faced by businesses developing or using AI tools is changing, as AI not only creates opportunities but also risks that give rise to liability. Risks may arise from the data input such a system uses, from the use of the AI system itself and from the system's later output. Businesses should determine whether they need to build additional safeguards into their AI systems. In the following, we showcase a few noteworthy litigation cases from around the globe which might affect your business in the future as well.

  • Intellectual property disputes: One of the most well-established litigation scenarios is based on allegations that the use of data by AI systems violates pre-existing intellectual property rights. Various litigation cases on this subject are pending worldwide (see more information e.g. under https://sustainabletechpartner.com/topics/ai/generative-ai-lawsuit-timeline/). One of the most prominent cases is the class action lawsuit pending in the US, filed by three artists against Stability AI and Midjourney, alleging that the use of their work infringes copyright and other laws and threatens to put artists out of a job (see more information on this case e.g. under https://www.theartnewspaper.com/2024/05/10/-deviantart-midjourneystablediffusion-artificial-intelligence-image-generators). Another ruling by the High Court in London in 2024 dealt with the patentability of trained "artificial neural networks" (see e.g. https://aibusiness.com/responsible-ai/a-uk-high-court-just-ruled-ai-can-be-patented). Furthermore, a number of courts have considered whether AI systems can be named as an inventor under patent laws (see e.g. https://www.cnbc.com/2023/12/20/ai-cannot-be-named-as-an-inventor-top-uk-court-says-in-patent-dispute.html). Further insight into such cases will be provided in a separate publication to follow and in an envisaged AI litigation tracker tool which we intend to update on an ongoing basis.
  • Automated decision-making disputes: In the Netherlands, the Amsterdam Court of Appeal has considered whether drivers have a right to access their personal data and information about the automated decision-making processes, including profiling, used by Uber and Ola. In a series of decisions, the court held that certain information must be disclosed and that certain automated decision-making processes, such as the creation of fraud probability scores and the batched matching system (which links Uber drivers to passengers), have a considerable effect on drivers; the companies were therefore required to provide a basic explanation of their algorithms, namely which factors were assessed and how they were weighted to reach a decision about the average ratings of drivers (see further information under https://www.sector-plandls.nl/wordpress/blog/the-ola-uber-judgments/).
  • Breach of contract disputes: Although the case was settled before trial, in Tyndaris SAM v. MMWWVWM ("VWM") a contractual claim was raised that an AI system had operated incorrectly (i.e. deviating from what was promised). Tyndaris had developed an AI system, run on a supercomputer, to apply AI to real-time news and social media in order to identify investment trends without human intervention. Tyndaris and VWM entered into a contract under which VWM invested in an account managed by Tyndaris using this AI system. In a payment dispute, among other allegations, VWM alleged that the system had not been sufficiently tested or properly designed or analyzed by professionals experienced in systematic trading, and that the AI system did not enter the market at the best times of the day, so that VWM had missed out on beneficial investments. This case showcases the need to mitigate contractual risks when developing and offering AI systems, with specific clauses about the level of testing and with limitation of warranties or liability clauses (for more information see http://dispute-resolutionblog.practicallaw.com/ai-powered-investments-who-if-anyone-is-liablewhen-it-goes-wrong-tyndaris-v-vwm).
  • Human rights claims and discrimination disputes: In the United Kingdom, the Court of Appeal considered human rights in the context of facial recognition systems powered by AI. In R (Bridges) v. Chief Constable of South Wales Police and others (2020), the Court of Appeal held that the use of automated facial recognition technology was not applied "in accordance with the law", in particular with regard to the right to respect for private and family life under the European Convention on Human Rights. The court censured the police force over its data protection impact assessment and stated that no reasonable steps had been taken to verify that the technology did not have a racial or gender bias (for more information see https://www.libertyhumanrights.org.uk/issue/legal-challenge-ed-brid-ges-v-south-wales-police/). In another noteworthy case, the State of Michigan announced that it had reached a USD 20 million settlement to resolve a class action lawsuit according to which the state's unemployment insurance agency had used an automated adjudication system to falsely accuse recipients of fraud, resulting in the seizure of their property without due process. The Michigan Integrated Data Automated System (MiDAS) was used to automatically detect and adjudicate suspected fraud. Owing to flawed algorithms, the system began to accuse unemployment applicants falsely and with little or no human oversight (for more information see e.g. https://michiganadvance.com/briefs/thousands-of-michiganders-falsely-accused-of unemployment-fraud-get-20m-settlement/).
  • Product liability disputes: Product liability disputes have existed in the realm of software for a long time. This stems from the fact that software (although not qualifying as a tangible "product" under Swiss product liability statutes in isolation) can in certain constellations (in particular, if integrated into holistic IT systems comprising hardware and software) qualify as a product and be subject to Swiss product liability statutes. In the context of AI, further developments are on the near horizon, as the new European AI Liability Directive is expected to enter into force alongside the general product liability regime. Under this directive, there will be rebuttable presumptions making it easier for individuals seeking compensation for harm caused by AI (and helping them meet the required burden of proof). The directive is envisaged to apply to claims brought under fault-based liability regimes, and a mechanism is foreseen for defendants to rebut the presumptions (for more information on the European AI Liability Directive see https://commission.europa.eu/business-economy-euro/doing-business-eu/contract-rules/digital-contracts/liability-rules-artificial-intelligence_en). The European AI Liability Directive will apply on top of, and separately from, the European AI Act (for more information on the European AI Act, see https://cms.law/en/che/publication/ai-series-1-the-eu-artificial-intelligence-act-is-almost-ready-to-go).

3.      Action items for businesses to limit their AI-litigation risks

 

  • Choose AI systems that are robust and have been tested for your purposes. In addition, provide adequate training to users on how to use AI systems safely, including adequate security and governance measures to ensure that systems are not simply outsourced to third-party providers but remain supervised under the company's own responsibility;
  • Inform customers if AI is used to deliver services and provide disclaimers to limit your liability exposure;
  • Make sure to document the design and operation of an AI system so that you are able to explain how the system works if required to do so by supervisory authorities. Even in ordinary civil court litigation, under evidentiary principles, an AI system operator may be required to explain his system, and an inability to do so can, e.g. in Switzerland, be assessed at the free discretion of the judge to the detriment of the AI system operator (a minimal sketch of such documentation by way of an audit trail is set out after this list). Finally, be reluctant to rely on AI-generated evidence itself, as it might not be considered credible at the free discretion of the judge (e.g. deepfake videos of a party's speech which prove to be untrue – all of which has already happened in court cases);
  • Apply contractual carve-outs limiting warranties and liability for defective AI systems or their outputs (if the system is integrated into an output-generating product);
  • Apply processes to safeguard consumers against risks, e.g. policies to ensure the review of AI outputs for biases or errors, the exclusion of confidential information at the input level and/or compliance with data protection principles when feeding AI with data at the input level;
  • Provide technical options for contestability in the event that a customer disagrees with the use of AI or with the output it generates. Immutability of AI output inevitably leads to conflicts with customers' legal remedies. Being able to address and change unsatisfactory AI output will help to avoid lengthy liability disputes in the first place. Be aware that technical non-contestability may lead to court decisions imposing permanent injunctions forbidding the use of a specific AI tool. This is in the interest neither of the AI tool developer nor of the users who rely heavily on the tool when integrating it into their own products. A permanent injunctive ban would lead to a general "lack of availability", which may trigger additional liability claims against the AI tool developer. Such scenarios are best avoided.
  • Be aware of jurisdictional risks when offering or introducing an AI system to the market. Where an AI system is developed or deployed in different jurisdictions, it can be challenging to determine which jurisdiction's law and venue ultimately apply. For example, multilateral treaties on jurisdiction suggest that actions can be commenced in the territory where the damage is suffered, where the infringing acts were committed, where the defendant is domiciled or in the market in which an intellectual property right violation has its impact.
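
To illustrate the documentation point above, the following is a minimal sketch, in Python, of how the operation of an AI system could be logged so that the operator can later explain which model version, input and review status lie behind a given output. All names used here (log_ai_decision, the log file path, the field names, the model name) are hypothetical and chosen purely for illustration; the sketch does not represent any particular product or legal requirement.

```python
import hashlib
import json
from datetime import datetime, timezone
from typing import Optional

AUDIT_LOG = "ai_audit_log.jsonl"  # hypothetical append-only log file

def log_ai_decision(model_version: str, prompt: str, output: str,
                    reviewer: Optional[str] = None) -> dict:
    """Append one traceable record per AI output.

    Storing a hash of the input instead of the raw text keeps confidential
    material out of the log while still allowing the operator to show which
    input produced which output if a court or authority asks for an explanation.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "output": output,
        "human_reviewer": reviewer,  # None means no human reviewed this output
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as log_file:
        log_file.write(json.dumps(record) + "\n")
    return record

# Example usage with a hypothetical model name
if __name__ == "__main__":
    log_ai_decision(
        model_version="contract-summarizer-1.3",
        prompt="Summarize clause 7 of the supply agreement.",
        output="Clause 7 limits liability to direct damages up to CHF 100,000.",
        reviewer="jane.doe",
    )
```

Recording only a hash of the input, as in the sketch, also supports the earlier point on keeping confidential information out of downstream systems while still preserving an explainable trail.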

Authors

Dr Dirk Spacek, LL.M.
Partner
Co-Head of the practice groups TMC and IP
Zurich
