AI in arbitration and the courts: Focus on Türkiye & Ukraine
Artificial intelligence (AI) is rapidly transforming the landscape of dispute resolution worldwide, prompting courts and arbitral institutions to reconsider traditional workflows and ethical boundaries. Both Türkiye and Ukraine stand at the forefront of this evolution, each navigating the integration of AI into their judicial and arbitration systems against distinct legal, social, and technological backdrops. While Türkiye emphasises pragmatic adoption within established legal safeguards and policy frameworks, Ukraine’s judiciary has responded to extraordinary challenges – such as the pandemic and ongoing conflict with Russia – by accelerating digital transformation and setting clear ethical boundaries for AI use. This article explores how both jurisdictions are harnessing AI’s potential to enhance efficiency, accessibility, and fairness in dispute resolution, while remaining vigilant about the risks and ensuring that human judgment remains central to the administration of justice.
Türkiye: Promise, Pragmatism, and the Path Ahead
The use of AI is moving rapidly from experimentation to everyday workflow in dispute resolution. In Türkiye, in both litigation and arbitration, parties, arbitral tribunals and the courts are exploring how AI can streamline routine tasks, sharpen analysis, and improve access to justice – while remaining alert to well-known risks around accuracy, confidentiality, and fairness. There is no debate that the integration of AI into dispute resolution mechanisms is inevitable; the key, however, is its conscious use: AI will be a tool, not a decision-maker.
Legal touchpoints
Turkish law does not yet have specific rules for resolving disputes involving AI. Nevertheless, the existing general framework provides meaningful safeguards. Turkish legislation ensures procedural fairness, while arbitration is subject to the duties of independence and impartiality. Attorneys have professional and ethical obligations, and the law on protection of personal data also plays a role. These factors collectively regulate how the technology may be used in practice. In cross-border disputes seated in Türkiye, parties are increasingly looking to international soft law for guidance. Recent guidance from overseas institutions emphasises that, while AI can be used for administrative and analytical support, core adjudicative functions must remain human, outputs must be verified, and confidentiality must be preserved. These themes align with Turkish procedural values and public policy.
Current landscape
Türkiye’s 2025 Presidential Programme includes the following statement: “Decision support systems in the judiciary will be strengthened by artificial intelligence, and recommendation systems supporting judicial activities will be developed.” The Presidential Programme is crucial for Türkiye because it provides a strategic roadmap for sustainable economic growth, green and digital transformation, and long-term national development goals. In line with the National Artificial Intelligence Strategy for 2024-2025, the Scientific and Technological Research Council of Türkiye (TÜBİTAK) Artificial Intelligence Institute is developing projects in a range of areas, including justice. The Ministry of Justice is also currently undertaking various projects in this area.
Practical uses in arbitration
AI delivers near-term value in document-intensive workflows, especially in alternative dispute resolution (ADR). While these applications are also relevant to litigation, they are particularly impactful in ADR because of its cross-border nature, multi-lingual proceedings, and reliance on virtual hearings, where efficiency and flexibility are critical. AI-enabled tools can assist with scheduling, the management of virtual hearings, document review, chronologies, issue spotting, and drafting. In international cases, machine translation and transcription can accelerate multi-lingual proceedings, and summarisation tools can help arbitrators and tribunals. However, these tools should be treated as support for decision-making, not as a substitute for legal judgement. Human judgement remains essential to guard against AI hallucinations. Tribunals and counsel may use AI to accelerate the process, but they must validate their sources, keep records of review, and preserve confidentiality.
In summary, the near-term value of AI in Türkiye lies in its ability to deliver speed, organisation and accessibility, rather than in replacing legal or adjudicative judgement. By adopting targeted measures alongside clear safeguards, both arbitration users and courts can boost efficiency while preserving the integrity and fairness that inspire confidence in the system.
Ukraine: cautious approach
Facing the challenges brought by the pandemic and the war, the Ukrainian judiciary has in recent years undergone significant development in the implementation of modern digital technologies. In particular, the Electronic Court system was introduced, allowing parties to submit documents to the court in electronic form and to familiarise themselves with the case materials remotely, and allowing courts to issue their documents in electronic form, making the proceedings paperless. In addition, regulations were put in place and technology was developed for participation in hearings via videoconference, so it is now common for a party to connect to a hearing remotely.
With the development of AI, it is clear that state courts and arbitral tribunals have to address its use and implement it in their day-to-day work.
AI in state courts
In September 2024 the Congress of Judges adopted the Code of Judicial Ethics (Code), which sets clear boundaries for how judges can use AI. According to the Code, AI is allowed only if it does not affect a judge’s independence and fairness, and if it is not involved in assessing evidence or the decision-making process. AI can help with information retrieval and analysis or procedural tasks, but it is crucial that the use of AI complies with all relevant laws.
In addition to the Code, to frame the use of AI by judges and their assistants, the High Anti-Corruption Court (HACC) issued an order approving the principles of using AI in court. The HACC takes a practical and cautious approach, similar to that in the Code, stating that the use of AI may support but not replace judicial decision-making. Its rules make clear that AI must never interfere with justice. Sensitive court documents must not be uploaded to AI tools to protect confidentiality. Court staff are expected to use AI responsibly, following ethical standards such as professionalism, integrity, and respect for the law. Importantly, AI shall be used only for administrative tasks and never in actual court proceedings, ensuring that judicial independence and fairness remain untouched.
AI is also increasingly being used by attorneys, although there are currently no clear regulations governing its use. Nevertheless, when using AI, attorneys must adhere to the general requirements for providing legal services as set out in the relevant legislation and the rules of professional ethics – particularly regarding client confidentiality and the quality of service. Sensitive information must not be shared with AI tools without the client’s express consent, and any output generated by AI should be carefully reviewed, as inaccuracies or hallucinations are not uncommon.
Another aspect of AI use arises when parties use AI to substantiate their position within court proceedings. In one case, a party asked the Supreme Court to review its interpretation of the term “voluntary commitment” based on the opinion produced by ChatGPT. In response, the Supreme Court stated that using AI-generated content to challenge a court’s decision is not only inappropriate but also an abuse of legal process. Such actions show disrespect to a judge and violate an attorney’s duty to act with care and honesty. Attorneys must ensure their submissions are based on solid legal reasoning and ethical standards, regardless of whether AI tools were used to help prepare them. The court criticised the AI-generated opinion as “clearly unfounded and knowingly baseless”, pointing to a lack of proper legal analysis.
In another case, a party challenged the Court’s decision on the grounds that the Court of Appeal had failed to consider opinions generated by ChatGPT and Grok when interpreting provisions of a land lease agreement. The Supreme Court upheld the lower court’s decision and emphasised that AI should be used to support and strengthen the rule of law. In this instance, however, the party used AI not to promote the proper administration of justice, but to question and appeal conclusions already reached by the Court.
In both cases, the Court emphasised that AI is just a support tool. It cannot replace judges or serve as a source of law. Legal decisions must be based on legislation and court practice, but not machine-generated suggestions, and attempting to use AI to challenge a court’s authority undermines the justice system and public trust.
Thus, the approach of the Ukrainian state courts to the use of AI is unanimous and clear – AI cannot be used as a source of evidence, to challenge a court’s decision, or to replace a human in the decision-making process. However, it can be used as an auxiliary tool, with proper checks of its results, to help attorneys and judges in their work.
Use of AI in arbitration
The use of AI in arbitration proceedings held by the International Commercial Arbitration Court at the Ukrainian Chamber of Commerce and Industry (ICAC) is not currently regulated. Neither the ICAC Rules nor the Law of Ukraine “On International Commercial Arbitration” addresses the issue of AI usage. However, these documents allow parties to agree on the procedure of arbitral proceedings at their own discretion, which gives the parties room to agree on AI usage if it does not contradict the ICAC Rules and effective legislation. In this regard, it should also be mentioned that the use of AI and the challenges related to it are being actively discussed within the arbitration community.
Considering the guidelines and practices already developed by state courts, along with the ongoing discussions within the arbitration community regarding the use of AI, the ICAC may eventually address this issue in its regulations.
Conclusion
The experiences of Türkiye and Ukraine illustrate both the promise and the complexity of integrating AI into judicial and arbitral processes. In both countries, AI is recognised as a powerful tool – capable of streamlining administrative tasks, improving document management, and facilitating remote proceedings – but not as a substitute for human decision-making. Legal and ethical frameworks in each jurisdiction emphasise the importance of judicial independence, procedural fairness, and the protection of confidential information. The courts in Ukraine have rejected attempts to use AI-generated content as a basis for legal argument and evidence, reinforcing the principle that technology must serve, not supplant, the rule of law. Meanwhile, Türkiye’s evolving policy landscape and ongoing projects signal a commitment to responsible innovation. As both jurisdictions continue to refine their approaches, their experiences offer valuable lessons for the global legal community: AI’s greatest value lies in augmenting, not replacing, the human elements of justice, and its adoption must be guided by clear safeguards, transparency, and respect for fundamental legal principles.