As courts start to confront how generative AI fits into privilege and work‑product doctrine, early decisions are already pointing in different directions. United States v. Heppner is often cited as a warning signal, but it should not be read as establishing a general rule about AI and privilege. The legal community is champing at the bit for AI‑related case law, but we need to proceed carefully.
Properly understood, Heppner is a narrow, fact‑bound criminal decision that turns on a flawed analytical move: treating AI systems as legal interlocutors rather than as tools.
What Heppner Held
In Heppner, a represented criminal defendant used Anthropic’s Claude to analyze evidence and assess legal exposure using information previously provided by counsel.
The court held that:
- The AI inputs and outputs were not protected by attorney‑client privilege, and
- They were not work product, because they were created independently by the defendant, not “by or at the behest of counsel.”
Read broadly, the decision suggests that a client’s independent AI use can destroy both privilege and work‑product protection. That conclusion is both doctrinally unsound and out of step with other courts.
The Core Distinction: Tool vs. Interlocutor
As Bridget McCormack and Shlomo Klapper explain in “The Machine Isn’t the Interlocutor: Why United States v. Heppner Gets Privilege Wrong,” Heppner starts with the wrong question. Instead of asking whether existing privilege was waived by using technology, the court asked whether the AI interaction itself independently satisfied the elements of privilege. That framing only works if the AI system is treated as a third‑party recipient of communications. It isn’t.
Current versions of AI systems:
- Have no independent agency,
- Cannot testify or volitionally disclose information, and
- Function no differently, in privilege terms, than cloud word processors, email platforms, or document‑review software.
Courts have never held that drafting a privileged memo in Word or storing it in other cloud systems like Google Docs waives privilege. Generative AI, as it currently exists, does not change that analysis.
Why Other Courts Got It Right
Just one week earlier, the Eastern District of Michigan reached the opposite result in Warner v. Gilbarco, holding that AI systems are “tools, not persons,” and refusing to compel production of AI‑assisted litigation materials.
Warner applied settled doctrine:
- Privilege attaches to the information, not the medium.
- Technology processing is not third‑party disclosure.
- The waiver inquiry turns on human risk – testimony, compulsion, and volitional disclosure – not computational activity.
Criminal Context Matters – and Limits Heppner
Heppner also rests on a narrow criminal common‑law work‑product standard, not the broader protection available under Federal Rule of Civil Procedure 26(b)(3). That alone makes the case a poor fit for civil litigation, eDiscovery, and law‑firm AI governance more broadly.
The Takeaway for Practitioners
Heppner should be treated as an outlier, not a template. The better rule, which is already adopted elsewhere, is straightforward:
Using AI as a tool does not waive privilege or work‑product protection any more than using email, cloud storage, or document‑review software does.
The law of privilege already knows how to deal with tools. The mistake is pretending the machine is something more.
This article was written by Martin Mayne, Avalon’s VP of eDiscovery.