Generative AI, LLMs and AI notetakers: A new threat to legal professional privilege?
Quick read
The rapid evolution of technology, and its use in the legal industry, has potentially created a whole new territory of risk. A recent US case, United States v Heppner, illustrates one such risk: how privilege operates in an increasingly digital environment.
In United States v Heppner, the court held that a defendant’s Claude-generated materials were not protected by “attorney-client privilege” or the “work product” doctrine. The Court concentrated on the fact that, by inputting the information into the AI tool, confidentiality had been lost. Whilst the laws of England & Wales differ in detail, the judgment may provide insight into how the English Court would respond in similar circumstances.
Uploading privileged materials into a consumer grade AI service is likely to be characterised as disclosure inconsistent with maintaining confidentiality and, thus, risks waiving privilege. This risk is heightened in insured defence matters, where insurers and other claims stakeholders are typically involved in the conduct of the defence. The wider circulation of privileged material increases the chance of inadvertent disclosure and of steps being taken that fail to preserve privilege (particularly where not all participants fully appreciate the risk).
Not all AI is the same
Consumer grade AI/LLM tools are often free and designed for ease of use by individuals. Providers of such tools typically retain broad rights to use any information uploaded to them, including for the purposes of training and improving the model. By contrast, enterprise grade AI tools can be configured and deployed in ways that mitigate these risks, for example by contractually preventing the provider from using client data for training purposes. Not all AI tools carry the same risks. This article focuses primarily on the risks arising from lawyers' and clients' use of consumer grade AI tools.
Generative AI is becoming standard practice
Generative AI has moved from novelty to reflex. People paste text into chatbots to summarise, rephrase, spot issues, draft replies and analyse documents. There are also a multitude of AI notetakers which can record and summarise the content of video and telephone calls.
The use of these products collides with a basic (and unforgiving) feature of privilege: if privileged material is copied to a third party in circumstances that are inconsistent with confidentiality, the privilege holder may have waived that privilege.
A warning from the US: United States v Heppner
The founder of a Texas-based financial services company was charged in the Southern District of New York in connection with an alleged fraud. In the course of the investigation, federal agents seized electronic devices and identified approximately 31 documents on those devices that reflected exchanges with Anthropic’s Claude (a generative AI assistant).
Defence counsel asserted that those materials were privileged because they were created by the founder to help analyse his legal position and to assist his lawyers. However, on 10 February 2026, it was ruled that the Claude-generated materials were not protected by either “attorney-client privilege” or the “work product” doctrine and, as such, must be produced to the Government.
Whilst the source materials were likely privileged, because the founder sought assistance from the consumer grade AI tool, the materials effectively became available to those investigating him. The decision appears to have turned, in particular, on the Court’s view that the defendant had disclosed the material to a third party AI service whose terms stated that inputs were not confidential (a common feature of consumer grade AI tools) and that the tool did not provide legal advice. In other words, using the AI service was treated as inconsistent with maintaining confidentiality.
The Courts of England & Wales
Heppner is not a binding authority in England & Wales, and the US concepts of “work product” and “attorney-client privilege” do not map perfectly onto “legal advice privilege” and “litigation privilege”. However, the case does provide insight into how the English Court may respond in similar circumstances. Under English law, legal professional privilege can be lost if confidentiality is not maintained, so it appears likely that the Court would weigh similar considerations to those in Heppner.
English Courts will examine the mechanics and contractual terms of the tool being used, and may be unconvinced by claims to privilege that ignore obvious third-party disclosure. The Courts and Tribunals Judiciary, Artificial Intelligence (AI) – Judicial Guidance (October 2025) states “you should treat all public AI tools as being capable of making public anything entered into them”. This suggests that the English Courts could take as a starting point that confidentiality may have been lost as soon as any privileged information is inputted.
This also introduces a new vector for satellite disputes in litigation. We anticipate parties will increasingly seek disclosure about how such tools were used, what prompts were entered, whether third party systems processed client information, and whether any privilege was inadvertently waived.
English law recognises that privilege may sometimes survive disclosure for a strictly limited purpose (for example, disclosure to a regulator on a limited basis). The difficulty with consumer grade AI services is that the disclosure is rarely “limited” in any meaningful sense. The recipient is not a defined group, the processing is not transparent, and the provider’s terms often contemplate retention and reuse. Those features may be inconsistent with a limited waiver argument.
The insurance dimension: tripartite privilege and how it can unravel
The insurer, insured and legal defence team (often instructed on a joint retainer basis by the insurer and the insured) typically exchange privileged material as the claim develops. English authorities recognise the practical need for that information flow, and concepts such as common interest privilege (and, in some circumstances, joint privilege) are used to explain why sharing privileged material between insurer and insured does not necessarily waive privilege as against the world.
If any party in the tripartite chain copies privileged advice into a consumer grade AI/LLM tool (or allows a consumer grade AI notetaker to record a privileged call), the opposing party will have a credible argument that confidentiality has been compromised.
Whether a court would characterise that as a full waiver, a partial waiver, or as conduct that simply makes the content obtainable from elsewhere, will be fact sensitive. What is not fact sensitive is the potential practical harm and the costs that could be incurred in arguing about whether privilege has been waived.
This is not merely a law firm problem. Claims handlers, insureds’ employees and even board members may use AI “helpfully” without appreciating that they are dealing with the litigation equivalent of forwarding a law firm’s advice to a stranger.
In practice, where does generative AI create privilege risk?
1) Copying privileged advice into a consumer AI/LLM tool
The clearest risk case is also the most common: someone pastes privileged legal advice (or draft submissions, witness statements, reserve discussions, or privileged correspondence) into a consumer chatbot to obtain a summary, a rewrite, a “second opinion”, or an action list. Whilst every case will be fact specific, we anticipate the outcome is likely to be the same as in United States v Heppner: there has been disclosure of privileged information to a third party.
Under English law, voluntary disclosure of privileged material to a third party outside the privileged relationship is a classic way to waive privilege. The more the terms of service make clear that the provider is entitled to use, retain or disclose the data, the more difficult it becomes to argue that confidentiality has been preserved.
2) Client-to-lawyer communications
A subtler scenario is where a client uses an AI/LLM tool to consider their position and then, at a later date, sends that output to their legal team.
A client-to-lawyer communication can, in principle, attract legal advice privilege. However, if the content was first shared with a consumer AI service, an opponent may argue that (i) AI systems and tools have no legal personality, so output obtained from an AI tool with no lawyer involvement cannot benefit from legal advice privilege, (ii) any privilege was waived by the earlier disclosure, or (iii) the output is a “pre-existing” document available from a third party and therefore not protected in practical terms.
Even where privilege can properly be maintained over the client-lawyer communication itself, the reality is that the same text may exist in the AI provider’s logs or systems and could be obtained from there (or from devices) without needing to break privilege at all.
3) The corporate ‘client’ problem (Three Rivers)
For corporates, there is an additional trap. Following the Three Rivers line of authorities, the English courts have taken a narrow view of who constitutes the “client” for legal advice privilege in an organisation. In broad terms, only those individuals authorised to seek and receive legal advice on the organisation’s behalf fall within the privileged “client” group; privilege will only attach to communications between the lawyer and the “client”. As such, AI, when used in investigations or legal analysis, may not be covered if it does not directly involve these authorised individuals.
4) Litigation privilege
Litigation privilege can cover communications with third parties (for example, experts, investigators, etc.) where litigation is reasonably contemplated and the dominant purpose of the communication/document is the conduct of that litigation.
Despite the wider circumstances to which it applies, litigation privilege may not be sufficient to cover consumer AI/LLM tool use. Whilst there is no reported English authority on this point, a party that wants to argue that an AI tool sits within a privileged workflow will likely need to be able to show that it is being used as a tightly controlled service provider (contractual confidentiality, no training on inputs, restricted access, defined retention/deletion, and appropriate security) and that its use is genuinely for the dominant purpose of litigation.
5) AI notetakers
AI notetakers typically join calls (sometimes as a visible participant, but not always), record audio, transmit it to the provider for transcription and analysis, and generate transcripts and summaries. Whilst it is true that privileged communications routinely pass through third‑party infrastructure (email servers, conferencing platforms, cloud storage), the crucial distinction is that AI notetakers do not merely carry the communication: they process its substance and produce derivative work product, often under terms that allow the provider to use the data to improve its systems.
In August 2025, another US case (Brewer v Otter.ai Inc., N.D. Cal.) concerned allegations that, among other things, Otter’s notetaker intercepted meeting content of non‑users and used conversation data to train automatic speech recognition and machine‑learning models without adequate consent.
Whatever the merits of that claim, it illustrates a practical point relevant to assessing whether the use of AI notetakers can lead to a waiver of privilege. It is risky to assume that an AI notetaker is merely ‘taking notes for you’. If the provider is entitled (or alleged) to use the data for its own model development, the same issues arise as with the use of consumer grade AI tools.
Practical considerations
Legal advisors should take a proactive role in ensuring that everyone involved in a legal case, even in a minor role, is warned not to enter sensitive information into consumer AI/LLM tools.
Where enterprise grade AI tools are to be used, lawyers should work closely with their information security teams to rigorously check the provider’s terms and conditions first. They should complete detailed due diligence and avoid implementing tools that retain inputs or use them to train underlying models. These steps should, at the least, reduce the risk of losing or waiving privilege.
Conclusion
The Heppner ruling is not an English law authority, but it is a useful reminder that English courts may take a tough view of privilege claims that sit uneasily with obvious third‑party disclosure. Confidentiality is key to privilege so it is important to give proper consideration to whether it is being maintained by everyone involved in the legal process.
In England & Wales, the safest working rule is this: if you would not forward a law firm’s advice to a stranger, do not paste it into a consumer grade AI/LLM tool or allow an external AI notetaker to record it. The convenience is not worth the risk.
References
United States of America v. Heppner, No. 25 Cr. 00503 (JSR) (S.D.N.Y.)
Artificial Intelligence (AI) – Judicial Guidance (October 2025)
Three Rivers District Council and Others v Governor and Company of the Bank of England (No.5) [2003] EWCA Civ 474 (Three Rivers (No.5))
Three Rivers District Council and Others v Governor and Company of the Bank of England (No. 6) [2004] UKHL 48 (Three Rivers (No.6))
Brewer v Otter.ai Inc., Case No. 5:25-cv-06911 (N.D. Cal.)