AI is becoming increasingly entrenched in the modern workplace. Employers are using AI tools for everything from screening resumes to drafting performance reviews. Managers are relying on AI to prepare employee documentation, discuss employee situations, and, in some cases, seek legal advice. Employees, for their part, are using AI tools to vet claims against their employers.
Without a lawyer in the loop, all of these use cases could create potentially discoverable evidence in workplace disputes. Two recent cases, United States v. Heppner and Warner v. Gilbarco, offer an early roadmap for how courts are approaching attorney-client privilege and attorney work product protections in discovery of AI-generated materials. While neither court applied any new rules on privilege or work product, both underscore the risk of AI-generated materials becoming bad evidence absent proper guardrails.
So far, discovery of AI evidence in workplace disputes is not widespread and is still being developed through court precedent. Both sides of the bar, however, are actively thinking about how to obtain AI evidence in discovery for employment claims. Not only will employers need to consider how this might play out when seeking and defending discovery requests, but they should also consider adopting good governance and controls to mitigate the risk of potentially problematic AI evidence becoming Exhibit A in their next workplace dispute.
In United States v. Heppner, a criminal defendant used a generative AI tool, Claude, to prepare reports analyzing potential charges and outlining possible defense strategies. He did this independently, without the direction of counsel, and later shared the AI-generated materials with his attorney.
After learning of Heppner’s AI use, the government requested those materials in discovery. Heppner’s attorney argued that they were protected by the attorney-client privilege and work product doctrine. The court rejected both arguments.
In Warner v. Gilbarco, a pro se employment plaintiff used ChatGPT to assist with drafting legal documents and for litigation strategy. The defendant sought to compel production of all materials related to her use of AI in connection with the lawsuit.
The court, however, found that the AI-generated materials were protected as attorney work product because they reflected plaintiff’s mental impressions, drafting process, and litigation strategy. Because plaintiff was essentially acting as her own lawyer, the court found that those materials constituted work product. The court also characterized generative AI platforms as “tools, not persons,” and distinguished work product waiver from attorney-client privilege waiver. Because there had been no disclosure to an adversary or in a manner likely to place the materials in an adversary’s hands, the work product protection remained intact. Notably, the court did not address whether plaintiff had used a non-confidential version of ChatGPT, which the Heppner court had found could result in a waiver.
Heppner and Gilbarco show that, so far, courts are treating AI-generated materials as ordinary evidence subject to ordinary discovery rules. Courts are applying familiar rules for attorney-client privilege, work product doctrine, and waiver to the new technological realities of AI.
The lessons learned from Heppner and Gilbarco are:
The realities of how managers, HR, compliance, and employees use AI could have immediate consequences in workplace disputes. Most individuals treat AI as a private sounding board, seeking advice, testing their thinking, or generating ideas. People are also often more candid in prompts than they would be in email, in Teams chats, or on other electronic platforms.
When it comes to employment disputes, this can create problematic evidence if not carefully managed. Employment cases can turn on intent, documentation, and consistency. Those are precisely the areas where AI prompts or outputs could create real problems.
For example, imagine the following evidence coming out in discovery:
Without a lawyer involved, all of these prompts could potentially be discoverable and might create problematic evidence when defending employment claims. For example, even if the manager had legitimate, non-discriminatory grounds for terminating the pregnant employee, merely asking how to terminate her without getting sued could arguably be used to show pretext or efforts to mask unlawful bias.
Even when legal is involved, privileged advice could potentially be waived if it is run through AI systems that do not have confidentiality protections. Imagine an executive who takes outside counsel’s advice and seeks to test it against ChatGPT. If done on a public version, according to the rationale in Heppner, it could potentially waive privilege.
Employees, too, often use AI to vet their claims before going to lawyers. The same risks that apply on the employer side could also be present for employees pursuing claims. For example, consider an employee asking ChatGPT “How can I bring a retaliation claim against my employer?” or “Can you give me a playbook to pursue a discrimination claim against my manager?” Although those prompts might not be case-dispositive if produced in discovery, this evidence could be used to, among other things: (1) show that plaintiff was trying to concoct a claim; (2) test plaintiff’s credibility; or (3) lead to other discoverable evidence.
Because plaintiffs’ counsel are increasingly attuned to AI use in the workplace, employers should prepare for that when defending employment litigation. Employers should expect to see more discovery requests seeking various types of AI evidence, including:
In depositions, employers should expect managers to be asked about their use of generative AI tools, particularly any use relating to plaintiff.
All this means that preparation will be key. Defense counsel will need to consider on the front end what AI evidence might exist, including in claims investigations, document preservation and collection, and discovery responses. Company-side deponents will also need to be prepared for specific questions about their AI use designed to elicit potentially discoverable evidence.
On the other hand, employers defending employment claims should consider their own discovery requests related to plaintiffs’ use of AI, including whether plaintiffs used AI tools to draft complaints, declarations, timelines, or damages analyses. In some cases, AI-generated materials could reveal inconsistencies, shifting theories, or assessments of plaintiff’s case strengths and weaknesses.
Courts, however, will likely apply traditional Rule 26 principles of relevance, burden, and proportionality to these cases. Courts are unlikely to permit fishing expeditions disconnected from relevance or overly broad requests that create burden or proportionality issues. AI discovery requests, whether directed at employers or plaintiffs, should be narrowly tailored and tied to concrete claims or defenses.
Although the danger of creating bad evidence is real, the answer is not to abandon AI. That is no longer realistic for many businesses.
The answer, instead, is having good governance and controls for AI. Employers should consider implementing the following risk-mitigation strategies that will allow them to harness AI’s benefits without turning algorithms into Exhibit A.