You ran your legal exposure through an AI chatbot. You organized your thinking, mapped out a defense strategy, even drafted some arguments. Then you sent the whole package to outside counsel. You assumed it was all wrapped in attorney-client privilege.
Judge Jed S. Rakoff disagrees.
On February 17, 2026, the Southern District of New York issued a written opinion in United States v. Heppner, No. 1:25-cr-00503, denying privilege protection for documents that defendant Bradley Heppner generated using the consumer version of Anthropic’s Claude before forwarding them to his lawyer. Judge Rakoff called it “a question of first impression nationwide.” He answered it clearly: neither attorney-client privilege nor the work product doctrine covered the materials.
This ruling matters far beyond the criminal case it came from. If you’re in-house counsel at a tech company and your company uses ChatGPT, Claude, or any other public AI platform to think through a legal issue, you need to read this carefully.
1. What Happened in Heppner
Bradley Heppner is the former CEO and Chairman of GWG Holdings, a publicly traded company. After federal agents executed a search warrant at his residence and seized his electronic devices, they found approximately thirty-one documents Heppner had generated by feeding queries into Claude. He had used the chatbot to outline his defense strategy and map potential legal arguments, and he later shared the resulting documents with his lawyer.
Heppner moved to suppress the documents as privileged. The government moved to compel. Judge Rakoff sided with the government.
2. The Attorney-Client Privilege Problem
The attorney-client privilege attaches to confidential communications between an attorney and a client, made for the purpose of obtaining legal advice. See Upjohn Co. v. United States, 449 U.S. 383 (1981).
Judge Rakoff started with an obvious point: Claude is not an attorney. The communication wasn’t between Heppner and his lawyer — it was between Heppner and a large language model. For Judge Rakoff, that alone was disqualifying.
But Judge Rakoff went further. He pointed to Anthropic’s own privacy policy, which told users that their inputs and outputs could be used to train the model and could be disclosed to regulatory authorities and other third parties. There was, the court held, no reasonable expectation of confidentiality. The moment Heppner typed his thoughts into a public AI platform, he was effectively broadcasting them to a third party that made no promise of secrecy.
Finally, Judge Rakoff ruled that Heppner’s communications with Claude were not made for the purpose of obtaining legal advice. Critically, Heppner was not acting at the suggestion or direction of counsel, which in Judge Rakoff’s view could have made a difference.
The fact that he later sent the documents to a lawyer didn’t change the analysis. Privilege doesn’t apply retroactively to documents that were never confidential to begin with.
3. The Work Product Problem
The work product doctrine is broader. Under Hickman v. Taylor, 329 U.S. 495 (1947) and Federal Rule of Civil Procedure 26(b)(3), materials prepared in anticipation of litigation receive protection from disclosure — even absent an attorney’s involvement in their creation.
Heppner argued his Claude-generated documents were work product because he prepared them knowing he would soon be in litigation. The court wasn’t persuaded, finding that the documents reflected Heppner’s own thinking, not any attorney’s mental impressions, strategies, or legal conclusions. Work product protection is at its strongest when it shields an attorney’s thought process, and in Judge Rakoff’s view, Heppner’s self-directed AI research didn’t come close.
4. The Practical Playbook for In-House Counsel
So what’s the lesson here? It’s not that “AI is never privileged” or that your company shouldn’t use AI. It’s that AI use needs the right governance framework: a legal team that understands how AI is being used across the company and that defines clear rules for what is and is not acceptable.
Four things worth doing in the next week:
- Audit your company’s AI use habits. Are you (or your team) running legal analysis, regulatory exposure assessments, or litigation prep through consumer AI tools? Are any non-legal teams? Note which platforms are being used and under what circumstances, and flag any materials that could be subject to future discovery.
- Check your AI provider’s privacy policy. Anthropic’s consumer-version terms allowed for data use in model training, a crucial fact for Judge Rakoff. Many enterprise-tier contracts are different. If you’re using a public-facing, free-tier AI product for legal work, assume no confidentiality.
- Route sensitive AI-assisted work through counsel. For anything touching active or anticipated litigation, including regulatory investigations, employment disputes, or deal disputes, use AI tools under attorney direction. Where possible, have outside counsel retain the AI tool and direct its use, and ensure that any internal AI tools are used only at the direction of the legal department.
- Draft an internal AI policy. This is a gap most companies still haven’t closed. The policy should distinguish between (a) public consumer AI tools (restricted for any work with confidential or privileged information); and (b) enterprise-tier AI tools with appropriate confidentiality and data processing terms (usable under defined protocols). It should also define what information is and is not appropriate to put into an AI tool, including material that is privileged or work product.
Further Reading
- United States v. Heppner, No. 1:25-cr-00503 (S.D.N.Y. Feb. 17, 2026)
- Upjohn Co. v. United States, 449 U.S. 383 (1981)
- Hickman v. Taylor, 329 U.S. 495 (1947)
- Fed. R. Civ. P. 26(b)(3) (work product protection)
This post is for general informational use only. This is not legal advice and does not form an attorney-client relationship. For any specific situation, you should seek out legal representation and counsel. Portions of this blog may constitute attorney advertising. Any testimonial or endorsement on this profile does not constitute a guarantee, warranty, or prediction regarding the outcome of your legal matter. Prior results do not guarantee a similar outcome. Results depend upon a variety of factors unique to each representation.


