Artificial Intelligence Update
Generative AI and Privilege: Practical Lessons from Two Early Decisions and What Comes Next
In United States v. Heppner, the United States District Court for the Southern District of New York addressed both attorney-client privilege and work product protection where a financial services executive generated legal strategy materials using a generative AI tool without counsel’s direction. The court held that neither protection applied: the communications were not confidential under the platform’s terms, were not communications with an attorney for the purpose of obtaining legal advice, and were not prepared at the direction of counsel or reflective of attorney mental impressions.
In Warner v. Gilbarco, the United States District Court for the Eastern District of Michigan addressed work product protection in the context of a pro se litigant’s AI-assisted analysis after the close of discovery. The court held that the materials were protected because they reflected the non-attorney plaintiff’s own mental impressions prepared in anticipation of litigation where he effectively was serving as his own attorney. The court further concluded that disclosure to a public AI platform did not constitute waiver because it did not meaningfully increase the likelihood that the material would reach an adversary — and because, as the court emphasized, generative AI programs are “tools, not persons.”
Read together, the decisions reflect continuity in privilege doctrine. Both apply familiar analytical frameworks to new technology and remain within established doctrinal boundaries. Here, we discuss the practical implications of that continuity and identify where issues are most likely to arise in the future.
I. Practical Lessons for Organizations
1. Inventory generative AI tools and evaluate the confidentiality, privacy, and data-handling terms that govern them.
A central driver of the court’s reasoning in Heppner was the absence of confidentiality. The court emphasized that if a generative AI platform’s privacy policy permits the collection, retention, training on, or disclosure of user inputs and outputs, there is no reasonable expectation of confidentiality.
This is not a novel concept. Communicating with an AI platform whose terms expressly permit use or disclosure of information arguably is functionally no different from speaking in the presence of a third party that has announced an intention to use what it hears. If anything, absent some future finding that communications with a personal AI assistant differ from communications with the public, such use may present greater risk than the familiar example of speaking on a crowded train: on the train, no one has affirmatively disclaimed confidentiality, yet waiver concerns nevertheless arise; with many AI tools, the terms expressly disclose an intention to use the information for various purposes.
In light of that reasoning, companies should consider identifying the universe of generative AI tools being used across their organizations and making decisions about what platforms may be used and how employees may use them. Organizations should consider reviewing the governing terms of service, privacy policies, licensing agreements, and data-retention provisions to determine whether inputs and outputs are treated as confidential, what security protections apply, and whether the provider retains rights to use or disclose that data.
Companies also should recognize that work-related use of publicly available or personal AI tools creates broader risks and may occur even if not formally authorized, or even if expressly prohibited, under company policies. Policies therefore should account for both approved enterprise systems and informal or “shadow” use and address the associated confidentiality risks. Companies also should consider appropriate, effective, and repeated training about following such policies and the risks inherent in failing to adhere to them.
2. Clarify that generative AI tools should not be used for legal analysis or strategy without approval from, and in collaboration with, counsel.
Companies long have instructed employees that discussions of legal strategy or litigation risk should occur in the presence of, or at the direction of, company counsel if privilege is to be maintained. Generative AI tools, at least for now, appear not to alter that principle. Organizations therefore should consider establishing clear guardrails prohibiting the use of generative AI platforms for legal strategy or legal analysis outside the involvement or direction of counsel.
The court’s analysis in Heppner reinforces this point. Setting aside whether a particular generative AI platform is confidential, Heppner makes clear that confidentiality alone is not sufficient to trigger attorney-client privilege or work product protection. Even the use of a confidential, sandboxed platform within an enterprise does not automatically cloak discussions of legal theories or strategy with privilege. Such content may fail other core elements of privilege — for example, it may not be a communication between a client and an attorney or may not be made for the purpose of obtaining legal advice. In this respect, entering a prompt into a commercial, non-sandboxed AI platform would seem to be little different from running a search on an internet search engine, which few would assert is a request for legal advice.
With respect to work product, Heppner and Warner confirm a well-established point: work product protection varies by jurisdiction. In some courts, protection turns on whether materials were prepared at the direction of counsel and reflect attorney mental impressions. In others, the focus is on whether disclosure materially increases the likelihood that the material will come into the hands of an adversary. Under those standards, work product protection may extend to materials prepared by non-attorneys, including pro se litigants, so long as the materials reflect mental impressions prepared in anticipation of or in connection with litigation. Generative AI does not alter these differences; it simply places them in sharper focus.
Notably, the Heppner court left open the possibility that confidential, counsel-directed use of a generative AI platform would be analyzed differently under traditional agency principles. The opinion expressly suggested that had counsel directed the exchange, the platform might have functioned as a lawyer’s agent. Where an AI tool is used within a confidential environment, at the direction of counsel, and for the purpose of providing or obtaining legal advice, those circumstances may align more closely with the traditional elements of attorney-client privilege and work product protection. Neither Heppner nor Warner calls that conclusion into question. And, to the extent Warner treated the pro se litigant as his own attorney, the two decisions’ analyses can be harmonized. Accordingly, counsel may wish to address generative AI use as part of initial discussions with clients about privilege considerations, including clarifying when and how such tools may be used in connection with legal matters.
3. Consider whether existing litigation hold and preservation protocols adequately account for AI-generated materials, including prompts, outputs, and related metadata.
These decisions serve as a reminder that AI-generated materials may become discoverable once a dispute is underway. The Heppner court treated those materials like documents created outside the presence or without the involvement of counsel. Although Warner upheld work product protection, its holding rests on its specific procedural posture, user role, and factual record. Organizations therefore should consider whether litigation hold notices, training materials, and retention and collection procedures adequately address AI-related materials.
For example, depending on the platform, relevant materials may include user prompts, generated outputs, or other related records to the extent such materials are retained and reasonably accessible. In enterprise environments, questions may arise regarding where and how such data is stored, how long it is retained, and whether it is technically retrievable without undue burden. As with other forms of electronically stored information, whether and to what extent such materials should be preserved or collected will depend on the specific facts, system architecture, accessibility, and proportionality considerations applicable in the particular situation.
Where litigation is reasonably anticipated, routine deletion or overwriting of relevant, reasonably accessible AI-generated materials may need to be suspended in the same manner as other electronically stored information. Incorporating AI tools into preservation protocols at the outset may reduce the risk of later accusations of spoliation, incompleteness, or inconsistent retention practices.
II. What Comes Next: Emerging Questions and Considerations
As noted above, these decisions reflect the continued application of established privilege and work product principles to new factual scenarios involving generative AI. Neither opinion creates new discovery rules or categorical obligations. At the same time, many other courts have yet to weigh in, and best practices will continue to evolve. Against that backdrop, some emerging questions are worth considering.
1. Privilege implications beyond the attorney-client and work product contexts
The doctrinal analysis raises the possibility that similar issues may arise in connection with privileges beyond the attorney-client and work product doctrines. As AI tools are increasingly used in other professional and personal settings, courts may be asked to consider how generative AI affects the application of spousal privilege, therapist-patient privilege, or clergy privilege. Neither decision addressed those questions directly, but the underlying reasoning suggests that traditional elements of confidentiality and agency would continue to shape the analysis. As such tools become more embedded in professional and personal settings, their interaction with other privilege doctrines may present additional nuances not yet addressed.
2. Broader confidentiality and risk management implications
Beyond privilege doctrines, organizations should consider generative AI use within a broader risk management framework addressing confidentiality, privacy, intellectual property ownership, contractual rights, and data governance. The privilege analysis represents only one dimension of potential exposure resulting from different levels and expectations of confidentiality. Even where privilege is not implicated, the use of AI tools may raise separate concerns regarding data security, ownership of outputs, regulatory compliance, and internal governance. A coordinated approach that integrates privilege considerations with broader confidentiality and information-management policies can enhance enterprise-wide risk management and strengthen governance in a rapidly evolving technological environment.
3. Conceptual framing and further evolution of AI
The way each court characterizes AI may shape future arguments, and this characterization provides the most difficult area in which to reconcile the two cases. In Warner, the court described the AI model at issue expressly as a “tool, not a person” in the course of its work product waiver analysis and chose not to consider whether there are real humans who access the information beyond the tool. Because the governing inquiry was whether disclosure materially increased the likelihood that the material would reach an adversary, the court treated the platform as an instrument rather than a recipient (and implicitly held that the people behind the instrument were not conduits to adversaries).
Heppner, however, suggested a different framing in a different context. The court noted that had counsel directed the use of the AI platform, it might have functioned “in a manner akin to a highly trained professional” acting as an agent of the lawyer. That language situates AI not merely as a neutral instrument but as something that could, under certain conditions, be likened to a human and operate within traditional agency principles.
These characterizations reflect distinct conceptual lenses that could become very important in different contexts: AI as tool versus AI as agent-like assistant. As generative AI systems become more autonomous and more embedded in litigation workflows, future courts may clarify how those characterizations intersect with waiver, agency, and privilege formation doctrines.
4. Evidentiary and doctrinal questions
Although these decisions are important developments in the discovery context, they do not resolve how courts will address AI-generated materials at later stages of litigation. Courts are likely to confront questions such as these:
- Should the prompts created by a party be treated differently from AI-generated outputs?
- How should the reliability of AI-generated content be evaluated under existing evidentiary standards, especially as model capabilities continue to advance rapidly?
- Do traditional hearsay principles provide an appropriate analytical framework as AI systems arguably become more humanlike, or will courts continue to treat AI systems solely as “tools”?
- Should juries be permitted to hear AI-generated legal analysis, and if so, under what safeguards, given the risk of confusion or prejudice?
- How should AI-generated output be compared to expert testimony, especially in light of Heppner’s observation that, if directed by counsel, an AI tool might “[function] in a manner akin to a highly trained professional”?
Practitioners should expect continued development in this area as generative AI becomes more embedded in personal and professional settings.
Attorney Advertising—Sidley Austin LLP is a global law firm. Our addresses and contact information can be found at www.sidley.com/en/locations/offices.
Sidley provides this information as a service to clients and other friends for educational purposes only. It should not be construed or relied on as legal advice or to create a lawyer-client relationship. Readers should not act upon this information without seeking advice from professional advisers. Sidley and Sidley Austin refer to Sidley Austin LLP and affiliated partnerships as explained at www.sidley.com/disclaimer.
© Sidley Austin LLP