A MoFo Privacy Minute Q&A: Wiretapping Theories Reach AI Notetakers
Question: In light of recent litigation applying federal and state wiretapping theories to AI notetakers, what should providers and users of such tools know about these claims, and what measures should they take to mitigate litigation risk?
Answer: Plaintiffs have alleged that AI notetakers intercept and record online meeting participants’ communications without consent, violating state and federal wiretapping laws. To fend off such wiretapping claims, AI notetaker providers and users can take steps to ensure transparency and consent, such as by displaying in-meeting notices, among other measures. Additional information is below.
Background: How AI Notetakers Work
AI notetakers are automated tools designed to capture and summarize the content of virtual meetings. They typically join meetings as a participant or connect via an application programming interface (API) to access meeting audio and generate transcripts in real time. The tool converts speech to text, often using cloud-based automatic speech recognition technology.
In practice, the meeting host enables the AI notetaker for a meeting, and the tool begins working once participants join. Depending on the platform and configuration, participants may receive a notice or visual indicator that transcription is in progress, though the form and timing of that notice can vary. Some systems allow only the host or subscribing user to manage the AI notetaker’s settings, while others allow all participants to pause or stop the transcription. Many AI notetakers also provide post-meeting features such as automated summaries or searchable transcripts.
Wiretap Allegations Relating to AI Notetakers
The plaintiffs’ bar is starting to test the viability of wiretapping claims based on AI notetakers. For example, in In re Otter.AI Privacy Litigation, 5:25-cv-06911-EKL (N.D. Cal. Dec. 5, 2025), plaintiffs allege that the Otter AI notetaker recorded, transcribed, and used the contents of plaintiffs’ conversations without their consent, asserting violations of the federal Electronic Communications Privacy Act (ECPA) and the California Invasion of Privacy Act (CIPA), among other claims.
Specifically, plaintiffs contend that Otter unlawfully intercepted meeting participants’ communications by capturing their voices, storing the resulting recordings and other meeting content, and using that information to train its AI models without the consent of all parties to the conversation. According to the complaint, only the Otter account holder provided consent. Further, plaintiffs allege that Otter’s notetaker did not display any notice to participants of the conversation or link to a privacy policy disclosing that it trains its AI models on meeting notes. Instead, Otter allegedly shifted responsibility to its account holders to seek permission and obtain consent for Otter’s activities.
While ECPA, unlike CIPA, generally requires the consent of only one party (e.g., the Otter account holder), plaintiffs assert that the “crime-tort” exception applies. That exception removes the protection of one-party consent where the interception is carried out for the purpose of committing a criminal or tortious act. Here, plaintiffs contend that Otter intercepted communications with a tortious purpose (i.e., committing the common law torts of intrusion upon seclusion and conversion) by converting and using participants’ conversational data to train Otter’s automatic speech recognition and machine-learning systems for its own pecuniary gain.
Steps to Mitigate Risk of Wiretap Claims
Businesses and organizations that develop or use AI notetakers may be able to reduce litigation risk by prioritizing transparency, consent, and contractual safeguards.
- For AI notetaker providers:
- Provide clear notice of the tool. Before any recording begins, notify meeting participants that the tool is operating within the platform, such as by presenting a pop-up or banner when the tool joins the meeting or listing the tool as a meeting participant.
- Disclose how the tool operates. Describe if and how the tool will record and use participant audio and provide a link to a privacy policy with additional details, including whether any recordings or transcripts will be used for AI training and development.
- Offer real-time participant controls. Consider allowing any participant to pause or remove the AI notetaker in real time. If such controls are offered, providers should regularly test them to ensure they work as intended and are easy to use.
- For organizations using AI notetakers:
- Assess notice and consent flows. Before deploying an AI notetaker, confirm that the tool provides clear disclosures about its presence, and explore allowing meeting participants to request that the tool be turned off.
- Align internal practices with participant expectations. Ensure that organizational privacy notices and meeting participants’ expectations align with how the tool operates, including any downstream use of recordings or transcripts by your business and the provider.
- Manage provider relationships carefully. Execute a data processing agreement with the AI notetaker provider to ensure that any recordings, transcripts, and other meeting information are used solely to provide the notetaking service.
- Restrict secondary data use. Consider limiting the provider’s ability to use identifiable meeting data for any purpose, including AI development or training, and ensure that the provider deletes relevant data in a timely manner.
The Otter case is still in the early stages of litigation, and it remains to be seen how the court will rule on the merits of the plaintiffs’ wiretapping theories. The case will be an important one to watch, as courts grapple with how decades-old wiretapping statutes apply to modern technologies.
Purvi G. Patel, Partner
Jonathan Louis Newmark, Of Counsel
Katherine Wang, Associate