Taiwo v Homelets of Bath Ltd: High Court warning on false AI-generated authorities and lawyer risk
The High Court’s decision in Taiwo v Homelets of Bath Ltd[1] is a clear, practical marker of the judiciary’s growing intolerance for fabricated citations and undisclosed ghost assistance generated or enabled by AI in court proceedings. In Taiwo, the Court refused the applicant permission to appeal against findings of fundamental dishonesty and dismissed related applications, but the judgment’s wider significance lies in its explicit condemnation of bogus authorities (one plainly “falsely created by AI”) and its signal that any lawyer connected to such material, regardless of attribution or compensation, risks a misconduct referral or even contempt (citing R. (on the application of Ayinde) v Haringey LBC[2]). The Court also made a limited civil restraint order (CRO) in light of persistent “totally without merit” applications, compounded by the reliance on false citations, and imposed significant adverse costs consequences.
This judgment is best read alongside the Divisional Court’s decision in Ayinde and the governance-focused analysis we set out in our recent article: Artificial intelligence in litigation practice post Ayinde and Mazur: what law firm leaders must do now.
Factual background
The underlying personal injury claim concerned harassment and assault dating back to events in 2010, with liability found in 2018 and damages to be assessed separately. Acting through a litigation friend, the applicant sought damages of around £2 million. At the quantum trial, the respondent argued that the applicant had been “fundamentally dishonest” in respect of her claim. The Court found that the applicant was dishonest on several core matters, which led to dismissal of the entire claim. Following consequential orders, the judge awarded indemnity costs and joined the applicant’s litigation friend and another individual (a solicitor) as parties for potential non‑party costs purposes; later orders made each responsible for costs liabilities to the respondent.
The run-up to the November High Court hearing was marked by repeated applications by the applicant and her litigation friend, including a skeleton argument filed on 27 March. The applicant’s skeleton was then amended and filed in November by another solicitor on her behalf, although the solicitor declined to confirm or deny that he was the author (a witness statement from the litigation friend confirmed that the applicant had engaged the solicitor on a ‘pay-per-task’ basis).
AI misuse and false authorities
The judge found that a cited authority (“Irani v Duchy Farm Kennels [2020] EWCA Civ 405”) in the skeleton argument filed in March was false and “no doubt falsely created by AI.” The Court asked the applicant for a copy of the authority before the November hearing, but none was provided.
The applicant’s litigation friend denied responsibility for the false authority, asserting that the applicant had filed the skeleton herself and had received assistance from “various people”. The Court rejected those assertions on the basis that he “was as much in control (whether with the help of unidentified lawyers, or AI, or both) of the 27 March Submission”.
The Court further identified another non‑existent Court of Appeal authority (“Chapman v Tameside Hospital NHS Foundation Trust [2018] EWCA Civ 2085”) included in an earlier skeleton signed by the applicant’s litigation friend, noting that although a county court case existed between those parties in 2016, there was no such 2018 appeal. The judge stated that the false reference “can be ‘recreated’ through Google’s AI Overview function.”
The Court emphasised its concern that there have now been “a number of judgments in which the presentation of false authorities to Court has, unsurprisingly, been deprecated”. Crucially, the Court confirmed that reliance on false citations in proceedings is just as unacceptable when committed by a litigant in person or a litigation friend, although “the sanction for having done so may not necessarily be the same as those applicable if a registered lawyer is responsible for the submission.” The Court emphasised that if a lawyer has written or reviewed material for use by a litigant representing themselves (which appears to be the case here), whether credited or not and whether paid or unpaid, they risk being referred for misconduct or even facing potential contempt once identified. It should be noted that, following the hearing, the applicant was directed to identify, by statement of truth, the nature and extent of external legal input and the identity of the lawyer (who had originally been ‘identified’ by the litigation friend in an earlier witness statement). The judge specifically referenced the Divisional Court’s ruling in Ayinde v Haringey LBC, framing this within evolving judicial expectations regarding AI verification responsibilities.
The Court also made a limited CRO against the applicant and the litigation friend, citing persistent “totally without merit” applications and a “complete lack of discipline” in interactions with the Court; conduct compounded by the proliferation of documents with false citations.
Regarding costs, the Court considered that, in the interests of justice, the applicant’s approach warranted a costs order against her. Ultimately, the applicant was ordered to pay 75% of the respondent’s reasonable costs from the date permission to appeal was refused on paper, with an interim payment required. The Court highlighted the additional burden placed on both it and the respondent by the reliance on false authorities.
Commentary: implications for solicitors’ risk
In summary, Ayinde confirms that generative AI is not a reliable legal research tool and imposes a non‑delegable duty on legal practitioners to verify authorities and citations against authoritative repositories; it also emphasises leadership accountability for firm‑wide training, verification and supervision. The Court in Taiwo effectively applied the same verification principle to the real‑world messiness of litigation practice: if an AI system (or a third party using AI) introduces spurious citations into skeletons or submissions, and a solicitor author or reviewer allows them to reach the Court, the professional risk attaches to the solicitor.
For solicitors and their insurers, five risks are salient.
First, source verification is now a live litigation risk, not mere best practice. Taiwo’s explicit identification of a non‑existent Court of Appeal citation that “can be ‘recreated’” by AI underscores how easily authoritative-looking references can pass a superficial check. Verification must be capable of being evidenced against The National Archives, the official law reports and reputable databases.
Second, ghost assistance and undisclosed external drafting elevate exposure. The Court examined document properties, authorship footprints and the role of “pay‑per‑task” providers, then drew inferences about control “with the help of unidentified lawyers, or AI, or both”. Where a solicitor contributes in any way to a filing, the duty to the Court is engaged, with potential misconduct referral or contempt exposure if false authorities are advanced.
Third, incident response matters. Once the Court queries an authority (as it did in this case before the hearing), delay or evasion compounds the problem. Ayinde‑aligned practice is to withdraw defective material promptly, correct the record, and communicate candidly with the Court and the other side; Taiwo shows the adverse inferences drawn when a party cannot produce the cited authority and gives shifting explanations.
Fourth, leadership oversight and training are risk controls that should be in place. Our prior article stresses firm‑wide AI policies, role‑based responsibilities, and mandatory training on hallucinations, spurious citations and jurisdictional drift, together with embedded verification checklists and authorised‑person[3] sign‑off for any court document. Taiwo supplies the case study that makes these controls “reasonably foreseeable” risk mitigations for professional indemnity purposes.
Fifth, adjacent exposures can escalate in parallel: CROs, indemnity or enhanced costs, and non‑party costs. Taiwo imposed a limited CRO after identifying persistent “totally without merit” applications and highlighted the burden created by false citations and proliferating filings. The decision also shows how non‑party costs risks for litigation friends and funders can crystallise where there is control of the litigation or enabling conduct.
Our December article argued that the immediate implications of AI are governance, verification, and accountability, citing Ayinde’s insistence on authoritative source checking and leadership responsibility. Taiwo is the practical complement: a High Court illustration of what happens when those controls fail in live litigation, and how quickly the risk migrates from drafting quality to credibility findings, restraint orders, and costs sanctions, together with explicit warnings that lawyers associated with AI‑driven fabrications may face misconduct or contempt.
[1] [2025] EWHC 3173 (KB)
[2] [2025] EWHC 1383 (Admin)
[3] See Section 18 of the Legal Services Act 2007 for more information on authorised persons.