Privilege in the Age of AI: SDNY Holds AI-Generated Documents Are Not Privileged

20 Feb 2026
Client Alert

Protecting privilege while using AI is not as straightforward as you might think. A new decision out of the Southern District of New York establishes that documents created with free or public large language models, without counsel’s direction, may not be protected by attorney-client privilege or the work product doctrine, even if the documents are later shared with counsel. United States v. Heppner, No. 25-cr-00503-JSR (S.D.N.Y. Feb. 17, 2026), Dkt. No. 27 (“Mem.”).

In Heppner, the court denied protection for AI-generated materials, emphasizing three facts:

(1) the defendant used a publicly available AI tool with terms that defeated any reasonable expectation of confidentiality; (2) he acted without counsel’s direction; and (3) the materials did not reflect counsel’s mental impressions when created. Heppner, Mem. at 6–7, 9–10; United States v. Heppner, No. 25-cr-00503-JSR, Tr. of Feb. 10, 2026 Conference at 3, 5–6 (S.D.N.Y. Feb. 10, 2026) (“Tr.”).

Practical Steps to Reduce Privilege Risk in AI-Assisted Work

In light of this decision, organizations using generative AI in connection with sensitive legal matters should consider the following:

  • Privilege cannot be created post hoc: Creating AI-assisted analyses or drafts independently, and only later providing them to counsel, generally will not give rise to attorney-client privilege or work product protection.
  • Public AI tools should be treated as third parties by default: If a tool’s terms permit provider access, training use, or disclosure, entering sensitive facts or legal theories can defeat any reasonable expectation of confidentiality and, with it, any claim of attorney-client privilege.
  • Enterprise tools should be used for sensitive matters: Contractual terms should include limits on training use, disclosure, retention, and access, along with security controls and auditability.
  • Direction by counsel is critical: To support a claim of privilege over generative AI work, there should be clear counsel-directed engagement and an expectation of confidentiality, backed by documentation evidencing that the work was commissioned for the purpose of obtaining legal advice.
  • Privilege logs should be prepared carefully: Descriptions of AI outputs must reflect counsel involvement and litigation focus (not just the substantive content) to support privilege claims.
  • Evolving judicial approaches should be monitored: Courts are actively shaping how AI interacts with procedural and ethical regimes. Practitioners should stay apprised of local rules and standing orders governing AI disclosure and certification requirements.

Background on the Case

After executing a search warrant, law enforcement seized electronic devices containing approximately 31 documents memorializing the defendant’s communications with a generative AI platform. Heppner, Mem. at 3. Defense counsel represented that the defendant, acting without counsel’s direction, used the AI tool to prepare “reports” outlining potential defense strategies after receiving a grand jury subpoena. Id. The defendant asserted attorney-client privilege and work product protection, arguing that the documents were created to facilitate discussions with counsel and were later shared with counsel. Id. at 3–4. The court rejected both claims. Id. at 1, 4; Tr. at 6.

Key Holdings

Attorney-Client Privilege

To qualify for attorney-client privilege, a communication must be (1) between a client and his or her attorney, (2) intended to be, and in fact, kept confidential, and (3) made for the purpose of obtaining or providing legal advice. The court concluded that the AI documents lacked “at least two, if not all three” of these required elements. Heppner, Mem. at 5.

First, the court concluded that the AI tool was not an attorney, and communications with it were not attorney-client communications. Id.

Second—and most significant for organizations evaluating the use of AI tools—the court held that the communications were not confidential because the defendant used a publicly available third-party AI platform with a written privacy policy that put users on notice that the provider collected both user inputs and tool outputs, used the data to train the tool, and reserved the right to disclose the data to third parties, including government authorities. Id. at 6–7. Accordingly, the court found no reasonable expectation of confidentiality. Id. at 7. In effect, the court treated the public AI platform as a third party for privilege purposes, and the governing contractual terms were central to the confidentiality analysis.

The court’s reasoning suggests that enterprise or private-instance tools with contractual safeguards may strengthen confidentiality arguments, but safeguards alone are unlikely to suffice absent clear counsel direction and litigation purpose.

Third, the court rejected the idea that later sharing the AI-created documents with counsel could retroactively create privilege. Even if the defendant intended to share the communications with counsel and ultimately did so, non-privileged communications are not “alchemically changed” into privileged ones merely because they are later transferred to counsel. Id. at 8.

Work Product Doctrine

The court likewise rejected work product protection. Although the documents may have been created in anticipation of litigation, the defendant generated them independently and they did not reflect counsel’s strategy at the time. Heppner, Tr. at 5; Mem. at 9–10. Because the defendant was not acting as counsel’s agent, the materials were not counsel-directed litigation preparation. Heppner, Mem. at 10. While work product protection can extend to non-lawyers, its purpose is to protect “the mental processes of the attorney,” not a client’s independently generated strategy analysis. Id. at 11.

Main Takeaways

The Heppner decision highlights several core considerations for organizations structuring AI use for sensitive legal matters:

  1. Courts may treat publicly available generative AI platforms as third parties for privilege purposes, especially where the terms permit the provider to access, train on, or disclose users’ inputs and outputs, because such terms can defeat any reasonable expectation of confidentiality. Organizations and individuals should consider requiring that any litigation-related AI usage be limited to enterprise or private-instance deployments with contractual safeguards.
  2. Privilege claims are harder to sustain where AI use is not directed by counsel and is instead undertaken independently by business personnel or individuals before or outside counsel’s direction. Where AI will be used for litigation-related analysis or drafting, organizations and individuals should consider requiring counsel to initiate and document the assignment so the activity is clearly counsel-directed.
  3. Work product protection remains centered on counsel’s mental impressions and counsel-directed litigation preparation, not a client’s independent strategy development, even if that work is later shared with counsel. To strengthen work product claims, organizations should route litigation strategy analysis through counsel and preserve a clear record that any AI-assisted drafts or analyses were prepared at counsel’s direction and reflect counsel’s legal theories or mental impressions, rather than standing alone as business-generated strategy documents.

We are Morrison Foerster — a global firm of exceptional credentials. Our clients include some of the largest financial institutions, investment banks, and Fortune 100, technology, and life sciences companies. Our lawyers are committed to achieving innovative and business-minded results for our clients, while preserving the differences that make us stronger.

Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Prior results do not guarantee a similar outcome.