Algorithms as Exhibit A: Practical Steps for Navigating Discovery of AI Evidence in Workplace Disputes
AI is increasingly entrenched in the modern workplace. Employers are using AI tools for everything from screening resumes to drafting performance reviews. Managers are relying on AI to prepare employee documentation, discuss employee situations, and, in some cases, seek legal advice. Employees, for their part, are using AI tools to vet claims against their employers.
Without a lawyer in the loop, all of these use cases could create potentially discoverable evidence in workplace disputes. Two recent cases, United States v. Heppner and Warner v. Gilbarco, offer an early roadmap for how courts are approaching attorney-client privilege and attorney work product protections in discovery of AI-generated materials. While these courts have not applied any new rules on privilege or work product, they underscore the risk of AI-generated materials becoming bad evidence without proper guardrails.
So far, discovery of AI evidence in workplace disputes is not widespread and is still being developed through court precedent. Both sides of the bar, however, are actively thinking about how to obtain AI evidence in discovery for employment claims. Not only will employers need to consider how this might play out when seeking and defending discovery requests, but they should also consider adopting good governance and controls to mitigate the risk of potentially problematic AI evidence becoming Exhibit A in their next workplace dispute.
United States v. Heppner
In United States v. Heppner, a criminal defendant used a generative AI tool, Claude, to prepare reports analyzing potential charges and outlining possible defense strategies. He did this independently, without the direction of counsel, and later shared the AI-generated materials with his attorney.
After learning of Heppner’s AI use, the government requested those materials in discovery. Heppner’s attorney argued that they were protected by the attorney-client privilege and work product doctrine. The court rejected both arguments.
- Attorney-Client Privilege: The court found that the materials were not privileged. First, the court found that the AI platform was not an attorney. Second, the court found that the materials could not be privileged because Heppner used a non-confidential AI tool. The court highlighted the AI provider’s terms of service that expressly disclaimed the provision of legal advice and reserved the right to use and disclose user data. Accordingly, the court found Heppner lacked a reasonable expectation of confidentiality for the communications to be privileged. Third, the court found that Heppner did not use the AI tools for purposes of obtaining legal advice since he did not do so at the direction of counsel. The court also rejected the idea that sharing the materials with counsel after the fact could “alchemically” transform them into privileged communications.
- Attorney Work Product: The court emphasized that the attorney work product doctrine exists to protect the mental impressions and strategic thinking of attorneys. Although work product can extend to materials prepared by non-lawyers, the protection depends on whether the material reflects counsel’s strategy or was prepared at counsel’s direction. Here, the defendant acted on his own initiative. The AI-generated reports may have influenced later discussions with counsel, but they did not reflect counsel’s thought processes at the time they were created. That distinction proved decisive to the court’s conclusions.
Warner v. Gilbarco
In Warner v. Gilbarco, a pro se employment plaintiff used ChatGPT to assist with drafting legal documents and for litigation strategy. The defendant sought to compel production of all materials related to her use of AI in connection with the lawsuit.
The court, however, found that the AI-generated materials were protected by the attorney work product doctrine because they reflected plaintiff’s mental impressions, drafting process, and litigation strategy. Because plaintiff was essentially acting as her own lawyer, the court found that those materials constituted work product. The court also characterized generative AI platforms as “tools, not persons,” and distinguished work product waiver from attorney-client privilege waiver. Because there had been no disclosure to an adversary or in a manner likely to place the materials in an adversary’s hands, the work product protection remained intact. Notably, the court did not address whether plaintiff had used a non-confidential version of ChatGPT, which the Heppner court had found could result in a waiver.
Lessons Learned from Heppner and Gilbarco
Heppner and Gilbarco show that, so far, courts are treating AI-generated materials as ordinary evidence subject to ordinary discovery rules. Courts are applying familiar rules for attorney-client privilege, work product doctrine, and waiver to the new technological realities of AI.
The lessons learned from Heppner and Gilbarco are:
- Using AI without lawyer involvement or direction may leave the resulting materials unshielded from discovery under the attorney-client privilege or work product doctrine;
- Sending those AI materials later to counsel does not automatically cloak them with privilege; and
- Creating AI materials on public versions of generative AI models may result in a waiver of attorney-client privilege or work product protections.
Implications for Workplace Disputes
The realities of how managers, HR, compliance, and employees use AI could have immediate consequences in workplace disputes. Most individuals treat AI as a private sounding board, seeking advice, testing their thinking, or generating ideas. People are also often more candid in prompts than they would be in email, in Teams chats, or on other electronic platforms.
When it comes to employment disputes, this can create problematic evidence if not carefully managed. Employment cases can turn on intent, documentation, and consistency. Those are precisely the areas where AI prompts or outputs could create real problems.
For example, imagine the following evidence coming out in discovery:
- A manager asking ChatGPT, “How do I terminate a pregnant employee without getting sued for discrimination?”;
- HR feeding an AI platform confidential details about an internal investigation into an employee’s retaliation and discrimination complaint to evaluate the company’s exposure under Title VII, without legal’s direction; or
- After receiving an EEOC charge and before contacting counsel, a senior executive providing AI with potential concerns the company has about the claims and asking AI to evaluate the strengths and weaknesses of defending the claim.
Without a lawyer involved, all of these prompts could potentially be discoverable and might create problematic evidence when defending employment claims. For example, even if the manager had legitimate, non-discriminatory grounds for terminating the pregnant employee, merely asking how to terminate her without getting sued could arguably be used to show pretext or efforts to mask unlawful bias.
Even when legal is involved, privileged advice could potentially be waived if it is run through AI systems that do not have confidentiality protections. Imagine an executive who takes outside counsel’s advice and seeks to test it against ChatGPT. If done on a public version, according to the rationale in Heppner, it could potentially waive privilege.
Employees, too, often use AI to vet their claims before going to lawyers. The same risks that apply on the employer side could also be present for employees pursuing claims. For example, consider an employee asking ChatGPT “How can I bring a retaliation claim against my employer?” or “Can you give me a playbook to pursue a discrimination claim against my manager?” Although those prompts might not be case-dispositive if produced in discovery, this evidence could be used to, among other things: (1) show plaintiff was trying to concoct a claim; (2) test plaintiff’s credibility; or (3) lead to other discoverable evidence.
The New Frontier—Discovery of AI Evidence
Because plaintiffs’ counsel are increasingly attuned to AI use in the workplace, employers should prepare for that when defending employment litigation. Employers should expect to see more discovery requests seeking various types of AI evidence, including:
- Generative AI prompts and outputs related to the plaintiff’s claims;
- AI-generated performance reviews or disciplinary drafts related to plaintiff; and
- Meeting recordings, transcripts, and summaries created by AI assistants related to plaintiff.
In depositions, employers should expect managers to be asked about their use of generative AI tools, particularly any use relating to plaintiff.
All this means that preparation will be key. Defense counsel will need to consider on the front end what AI evidence might exist, including in claims investigations, document preservation and collection, and discovery responses. Company-side deponents will also need to be prepared for specific questions about their use of AI, which opposing counsel may use to elicit potentially discoverable evidence.
On the other hand, employers defending employment claims should consider their own discovery requests related to plaintiffs’ use of AI, including whether plaintiffs used AI tools to draft complaints, declarations, timelines, or damages analyses. In some cases, AI-generated materials could reveal inconsistencies, shifting theories, or assessments of plaintiff’s case strengths and weaknesses.
Courts, however, will likely apply traditional Rule 26 principles of relevance, burden, and proportionality to these cases. Courts are unlikely to permit fishing expeditions disconnected from relevance or overly broad requests that create burden or proportionality issues. AI discovery requests, whether directed at employers or plaintiffs, should be narrowly tailored and tied to concrete claims or defenses.
Practical Steps for Mitigating Risk of Bad AI Evidence
Although the danger of creating bad evidence is real, the answer is not to abandon AI. That is no longer realistic for many businesses.
The answer, instead, is having good governance and controls for AI. Employers should consider implementing the following risk-mitigation strategies that will allow them to harness AI’s benefits without turning algorithms into Exhibit A.
- Inventory AI Tools: Employers should create a comprehensive inventory of AI tools in use across the organization. That includes understanding what AI tools are used in HR processes (e.g., resume screening), enterprise generative AI platforms, AI recording and transcription assistants, and shadow or personal AI use for company matters. Employers cannot manage discovery risk if they do not know where the data are stored.
- Determine AI Usage Policies: AI usage policies are critical for companies to set the “rules of the road” on AI use. These policies should, among other things, define when AI may be used, when it may not be used, and what types of information may be entered into such tools. Those policies should also prohibit:
- Non-lawyer managers, HR, and compliance employees seeking legal advice from AI without involving internal or external counsel;
- Discussing company business, including employee situations, on non-company-approved AI tools; and
- Entering legally privileged communications or work product into generative AI without attorney involvement or direction.
- Train Managers, HR, and Compliance: Training should emphasize that AI prompts and outputs are potentially discoverable, that privilege does not automatically attach to AI use, and that sensitive legal or employment analyses should be routed through counsel. Consider providing specific training for HR, compliance, and managers on drafting AI prompts that avoid generating potentially bad evidence. A simple rule of thumb: if you would not want the prompt read aloud in a deposition, do not type it into AI.
- Identify AI Recording Risks: AI recording tools can create records of conversations that might otherwise have existed only in notes. Audio files, transcripts, summaries, and metadata can all become discoverable. Organizations should adopt clear guidelines regarding when recording is appropriate and when it should be disabled—particularly during privileged discussions involving legal strategy. Where recording is necessary, consider implementing access controls and segregation protocols to reduce inadvertent disclosure risk.
- Update Retention Policies: AI tools can generate vast volumes of data. Resume scanners may create scores, rankings, notes, and metadata for tens of thousands of applicants. AI recording assistants may generate transcripts and summaries for hundreds of conversations annually. Retention policies should align with recordkeeping requirements under existing employment laws and new AI laws while avoiding unnecessary indefinite storage of AI data or materials that are not legally required to be maintained.
- Assess Litigation Hold Practices: Consider updating litigation hold templates to expressly reference holding relevant AI-related materials, such as prompts, outputs, recordings, and audit logs. Employers should also consider asking relevant custodians to identify any personal or non-approved AI tools used to discuss claims or defenses, so the company can assess whether there might be additional evidence to hold. Close coordination among legal, HR, IT, and vendors is essential to ensure that auto-deletion settings can be suspended when necessary and that AI data can be preserved and exported in defensible and usable formats.
Andrew R. Turnbull, Partner