Question: In light of recent litigation applying federal and state wiretapping theories to AI notetakers, what should providers and users of such tools know about these claims, and what measures should they take to mitigate litigation risk?
Answer: Plaintiffs have alleged that AI notetakers intercept and record online meeting participants’ communications without consent, violating state and federal wiretapping laws. To fend off such wiretapping claims, AI notetaker providers and users can take steps to ensure transparency and consent, such as displaying in-meeting notices, among other measures. Additional information is below.
AI notetakers are automated tools designed to capture and summarize the content of virtual meetings. They typically join meetings as a participant or connect via an application programming interface (API) to access meeting audio and generate transcripts in real time. The tool converts speech to text, often using cloud-based automatic speech recognition technology.
In practice, the meeting host enables the AI notetaker for a meeting, and the tool begins working once participants join. Depending on the platform and configuration, participants may receive a notice or visual indicator that transcription is in progress, though the form and timing of that notice can vary. Some systems allow only the host or subscribing user to manage the AI notetaker’s settings, while others allow all participants to pause or stop the transcription. Many AI notetakers also provide post-meeting features such as automated summaries or searchable transcripts.
The plaintiffs’ bar is starting to test the viability of wiretapping claims based on AI notetakers. For example, in In re Otter.AI Privacy Litigation, 5:25-cv-06911-EKL (N.D. Cal. Dec. 5, 2025), plaintiffs allege that AI notetaker Otter recorded, transcribed, and used the contents of plaintiffs’ conversations without their consent, asserting violations of the federal Electronic Communications Privacy Act (ECPA) and the California Invasion of Privacy Act (CIPA), among other claims.
Specifically, plaintiffs contend that Otter unlawfully intercepted meeting participants’ communications by capturing their voices, storing the resulting recordings and other meeting content, and using that information to train its AI models without the consent of all parties to the conversation. According to the complaint, only the Otter account holder provided consent. Further, plaintiffs allege that Otter’s notetaker did not display any notice to participants of the conversation or link to a privacy policy disclosing that it trains its AI models on meeting notes. Instead, Otter allegedly shifted responsibility to its account holders to seek permission and obtain consent for Otter’s activities.
While the ECPA, unlike CIPA, generally requires the consent of only one party (e.g., the Otter account holder), plaintiffs assert that the “crime-tort” exception applies, which defeats one-party consent where the interception is carried out for the purpose of committing a criminal or tortious act. Here, plaintiffs contend that Otter intercepted communications with a tortious purpose (i.e., committing the common-law torts of intrusion upon seclusion and conversion) by converting and using participants’ conversational data to train Otter’s automatic speech recognition and machine-learning systems for its own pecuniary gain.
Businesses and organizations that develop or use AI notetakers may be able to reduce litigation risk by prioritizing transparency, consent, and contractual safeguards, such as displaying clear in-meeting notices before transcription begins, obtaining consent from all meeting participants rather than only the account holder, and allocating consent-related responsibilities in customer agreements.
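For providers building these safeguards into their products, the consent logic described above can be made concrete in software. The sketch below is purely illustrative and assumes hypothetical class and method names (Participant, MeetingSession, record_consent, and so on) that do not correspond to any real notetaker’s API; it shows one way a tool could display a notice to every joining participant and withhold transcription until all parties, not just the account holder, have consented, as two-party-consent statutes like CIPA effectively require.

```python
from dataclasses import dataclass, field


@dataclass
class Participant:
    """A meeting participant and their consent status (hypothetical model)."""
    name: str
    consented: bool = False


@dataclass
class MeetingSession:
    """Hypothetical meeting wrapper that gates transcription on consent."""
    participants: list = field(default_factory=list)
    transcribing: bool = False

    def join(self, participant: Participant) -> None:
        self.participants.append(participant)
        # Show an in-meeting notice to each new participant before any
        # audio is captured, and pause transcription until everyone
        # currently in the meeting has consented.
        self.show_notice(participant)
        self.transcribing = self.all_consented()

    def show_notice(self, participant: Participant) -> None:
        # In a real product this would be a visible in-meeting banner
        # linking to the provider's privacy policy.
        print(f"Notice to {participant.name}: this meeting may be "
              "transcribed by an AI notetaker.")

    def record_consent(self, participant: Participant) -> None:
        participant.consented = True
        self.transcribing = self.all_consented()

    def all_consented(self) -> bool:
        # Two-party-consent regimes effectively require consent from
        # every party to the conversation, not just the account holder.
        return bool(self.participants) and all(
            p.consented for p in self.participants
        )
```

In this design, transcription is off by default and re-evaluated whenever a participant joins or consents, so a late joiner automatically pauses capture until they too have been notified and have agreed.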
The Otter case is still in the early stages of litigation, and it remains to be seen how the court will rule on the merits of the plaintiffs’ wiretapping theories. The case will be an important one to watch, as courts grapple with how decades-old wiretapping statutes apply to modern technologies.