Artificial intelligence in litigation practice post Ayinde and Mazur: what law firm leaders must do now
Artificial Intelligence (AI) is reshaping litigation practice, but for law firms the most immediate implications are not technological: they concern governance, training and accountability. The Divisional Court’s judgment in Ayinde[1] makes plain that partners and managers must ensure everyone who provides legal services understands their duties when using AI, and that verification and supervision of AI output is non‑negotiable. Recent first‑instance decisions, together with the SRA’s guidance and the debate following the decision in Mazur[2], reinforce that firms must define who may do what, under whose authority, and with what supervision.
The focus of this article is the standards that now apply to firms in relation to AI, the requirement for leadership responsibility and training following Ayinde, the practicalities highlighted by Mazur, and how these principles apply to witness statements, expert evidence and day‑to‑day drafting.
Ayinde: verification duties and accountability
In Ayinde, the court held that generative AI tools are not capable of reliable legal research and that practitioners who use such tools must verify the output against sources before relying on it in a professional context. The court identified those authoritative sources expressly: official legislation databases; the National Archives’ judgments; the official Law Reports; and reputable legal publishers. The court also confirmed that this duty applies whether the lawyer used AI personally or relied on another’s work, for example work done by trainees or paralegals.
The court went further, fixing organisational accountability: those with leadership responsibilities (including managing partners, heads of department and heads of chambers) must now take practical and effective measures to ensure that everyone providing legal services understands and complies with their professional and ethical obligations and their duties to the court when using AI. In future Hamid‑style hearings[3], the court may enquire into what steps leaders have taken. The available sanctions are serious and include public admonition, costs and wasted costs orders, strike out, referral to regulators, contempt and, in extreme cases, referral to the police. Critically, Ayinde restates that a solicitor cannot outsource accuracy to a client; it is the lawyer’s responsibility to ensure that authorities and quotations put before the court are genuine and relevant.
Recently, in Taiwo v Homelets of Bath Ltd[4], the High Court reaffirmed the Ayinde‑style duty to verify authorities, condemning fabricated, AI‑generated case citations (including a non‑existent Court of Appeal authority) and stressing that, while a litigant in person may be afforded some leeway, the position for lawyers is markedly more serious, raising the prospect of misconduct findings or contempt. Although no contempt finding or personal costs order was made solely for the citation fabrication, the false authority led to credibility findings against the Claimant, contributed to the making of a civil restraint order, and fed into robust adverse costs consequences and the prospect of professional referral where a lawyer was involved.
Mazur and AI: clarifying who may “conduct litigation”
The decision in Mazur underscores that non‑authorised[5] staff cannot conduct litigation even under supervision, and AI has only intensified the boundary issues that follow. As firms redesign workflows and embed automation, there is a foreseeable risk that non‑authorised staff (or software acting as their proxy) stray into reserved activities or decisions that must remain with an authorised solicitor. The Law Society has urged the SRA to clarify how Mazur applies where AI systems are used in litigation, noting uncertainties around statements of truth and tribunal representation rules, as well as duties where unauthorised persons have crossed the line. Until clear regulatory guidance is in place, prudent firms should keep critical litigation tasks - strategy, issue of proceedings, settlement decisions, pleadings, statements of case, witness statements, expert instructions and authorities relied upon - squarely within SRA‑authorised hands, with documented human judgment and sign‑off. This is as much about competence and candour to the court as it is about the statutory definition of reserved activities.
For more on the implications of the Mazur decision, please see our Professional Indemnity Space Law Now: A new era for litigation teams following Mazur & Stuart v Charles Russell Speechlys LLP [2025] EWHC 2341 (KB).
What “good” looks like inside firms: policy, roles and training
Ayinde makes clear that effective leadership requires a firm‑wide AI policy. That policy must identify approved tools; specify whether, and what, client, confidential or personal data may be uploaded to each tool; require verification of any AI‑assisted legal research or analysis against authoritative sources; and mandate documented human review and sign‑off for any AI‑assisted court document or client advice.
Responsibilities should be role‑based: partners and practice heads are responsible for implementation; compliance and risk teams monitor compliance and exceptions; knowledge teams maintain a directory of authoritative repositories; and supervisors sign off AI‑assisted submissions. Training should be mandatory, scenario‑based and tested for all fee‑earners and relevant staff. It should cover known failure modes (hallucinations, spurious citations and jurisdictional drift), confidentiality and data minimisation, the SRA Codes and the judiciary’s guidance on accuracy and responsibility (see below for more on the judiciary’s own guidance). Firms should also rehearse incident response where defective material is filed: prompt withdrawal, corrective communication to the court and the parties involved, client and PI insurer notification where appropriate, and documented remediation to help mitigate downstream consequences.
AI is also accelerating changes to legal support roles. One national firm has proposed removing its “litigation assistant” role, moving some billable work to paralegals and re‑scoping team PA duties, with AI cited as a factor. Whatever the organisational model, Ayinde requires that training and supervision keep pace. If routine work is consolidated into fewer hands aided by automation, controls must be upgraded, for example, verification checklists embedded in templates, clear escalation paths for ambiguous outputs, and supervisor sign‑off before filing.
Where non‑authorised persons are involved, firms must ensure those individuals understand the limits of their role post‑Mazur and that an authorised solicitor remains accountable for any document placed before the court.
Witness statements, expert evidence and day‑to‑day drafting: first‑instance application of Ayinde
The County Court has begun to apply Ayinde’s standards to everyday litigation. In a recent Birmingham case[6] concerning an AI‑generated witness statement, the court found that fictitious authorities had been advanced and imposed a wasted costs order. The judge emphasised that the problem was not the mere existence of an AI feature in a commonly used case management platform, but the failure to verify outputs and the absence of adequate supervision and document control, which allowed false material to be filed and even verified by a statement of truth. The court again cited Ayinde’s verification requirements and the SRA’s guidance that, where work is conducted on a solicitor’s behalf by others, the solicitor remains accountable for that work. The takeaways are practical: ensure every citation is checked on trusted databases; do not permit any document to be filed without review by an SRA‑authorised person; put in place secure signature protocols; label drafts clearly; and keep an auditable record of verification.
Judicial warnings have also extended to expert work. A senior High Court judge, speaking at the recent Bond Solon Expert Witness Conference, described a solicitor’s insistence on providing an expert with an AI‑generated draft report[7] as a gross breach of duty, and cautioned that experts who accept such instructions compromise their independence.
Firms should make explicit in expert instructions that AI‑generated draft reports are prohibited and that experts must produce their own independent analysis. Internally, fee‑earners should be trained that AI may assist in organising documents or creating neutral summaries, but it must not be used to draft, ghost or “seed” expert opinions.
The judiciary’s own guidance
The judiciary’s refreshed AI guidance aligns closely with Ayinde. Judges may use approved tools for administrative tasks and summarisation, but they remain personally responsible for anything produced in their name and are warned against relying on public chatbots for legal research or analysis.
The guidance anticipates that legal representatives and litigants in person may use AI and encourages inquiry into verification where AI use is suspected, including vigilance for hallucinated case law and “white text” prompt injection. In parallel, the Civil Justice Council has convened a working group, chaired by the Deputy Head of Civil Justice, to consult on whether rules are needed to govern AI use by legal representatives in preparing pleadings, witness statements and expert reports. Law firms should anticipate more prescriptive expectations around disclosure of AI use in court documents, verification standards and safeguarding the independence of expert evidence (as discussed above).
A concise blueprint for firm leaders
Law firms’ core duties remain unchanged - accuracy, independence of judgment and candour to the court - and a proportionate blueprint is now clear. Partners and managers should adopt a comprehensive firm‑wide AI policy (along with a rehearsed incident response), require explicit verification against authoritative sources for any AI‑assisted analysis, and implement mandatory, tested training. Above all, keep critical litigation decisions and court‑facing documents within authorised hands, and evidence the human judgment that Ayinde and Mazur require. Done well, AI can become a disciplined accelerator of competent practice rather than a shortcut around professional responsibility.
[1] R (Ayinde) v Haringey LBC [2025] EWHC 1383 (Admin)
[2] Mazur v Charles Russell Speechlys [2025] EWHC 2341 (KB)
[3] A Hamid-style hearing originates from the case R (Hamid) v Secretary of State for the Home Department [2012] EWHC 3070 (Admin). It is not a substantive hearing on the merits of a client’s case, but rather a disciplinary or supervisory hearing convened by the High Court (usually the Administrative Court) to scrutinise the conduct of legal representatives in judicial review proceedings.
[4] [2025] EWHC 3173 (KB)
[5] Individuals who are not authorised persons under the Legal Services Act 2007 and who therefore cannot carry out reserved legal activities. See section 18 of the Legal Services Act 2007 for more information on authorised persons.
[6] Ndaryiyumvire v Birmingham City University (No. L20ZA723), County Court at Birmingham, HHJ Charman, 14 October 2025
[7] Reported in the Bond Solon Expert Witness Survey 2025