Picture this. A human resources manager opens a video call with an employee to discuss a sensitive performance issue. She has prepared carefully. This is the kind of conversation that requires candor, discretion, and trust. Thirty seconds after the call begins, a small notification appears in the corner of the screen: “Otter.ai has joined the meeting.”
The employee did not ask permission. No one approved it. The meeting is now being transcribed in real time, the transcript stored on a third-party server, processed by an AI model the company has no contract with, and retained indefinitely in an account that IT has no visibility into. Every word of a sensitive employment conversation, the kind that could become Exhibit A in a wrongful termination claim, is now a permanent record sitting outside the organization’s control.
Or consider the inverse scenario, arguably more dangerous: the employee opens their phone under the desk, activates an AI transcription app, and begins recording the manager’s feedback without saying a word. No notification pops up. No one knows. The recording captures statements about performance ratings, compensation decisions, and assessments of the employee’s future at the company. It sits in the employee’s personal account, timestamped, searchable, and fully discoverable in litigation.
If either of those scenarios sounds implausible, consider how many meetings you have attended in the last six months where someone’s AI note-taker joined uninvited. The technology is ubiquitous, the legal exposure is real, and the vast majority of organizations have not yet built the governance structures to address it.
The First Wave of AI Governance Was the Easy Part
Over the past two years, most serious organizations have done the foundational work of AI governance. Cross-functional committees were formed. AI use policies were drafted. Employees were trained not to paste sensitive data into public AI tools. The guidance was clear enough: do not feed proprietary information, personal data, or confidential documents into free generative AI tools that may use inputs for model training.
That first wave of governance was necessary. It was also, in retrospect, the tractable part of the problem. It was about controlling what employees deliberately put into AI tools. The inputs were visible, the risk was identifiable, and the policy intervention was relatively straightforward.
The second wave is harder, because it is about what AI tools are capturing from employees without any deliberate input at all. AI transcription and recording tools do not wait for an employee to paste something into a prompt box. They join the meeting. They listen. They transcribe everything. They store it. And they do all of this in the background of conversations that were designed to be private, candid, and, in many cases, legally protected.
This is the governance problem that most organizations have not yet solved. And unlike the first wave, it does not have a clean policy fix. It requires a rethinking of how organizations treat spoken conversation as data, who has the right to record it, where those recordings go, how long they are kept, and what legal obligations the existence of those recordings creates.
The Tools Are Everywhere and They Are Genuinely Useful
It would be a mistake to frame this purely as a technology problem to be eliminated. AI transcription tools, including Otter.ai, Fireflies.ai, Microsoft Copilot’s meeting transcription features, Zoom’s built-in AI summary service, and a growing ecosystem of competitors, are genuinely valuable. They free participants from the cognitive load of note-taking. They produce searchable records of decisions and action items. They make meetings more accessible for employees with hearing impairments or attention challenges. They reduce the friction of asynchronous collaboration across time zones.
The goal of a mature AI transcription governance framework is not to ban these tools. It is to govern them in a way that captures their utility while managing the serious legal and privacy risks their unrestricted use creates. That requires understanding what those risks actually are, and they are more varied and more serious than most organizations appreciate.
The Consent Problem Is More Complicated Than Most People Think
Recording laws in the United States operate on a patchwork basis that creates genuine compliance complexity for any organization whose employees work across multiple states, which is to say virtually every organization of meaningful size operating after 2020.
The federal Wiretap Act sets a one-party consent baseline: a recording is lawful if at least one party to the conversation consents to it. Under this standard, an employee who records their own meeting is the consenting party, and the recording is federally permissible regardless of whether other participants know about it.
But eleven states, including California, Illinois, Florida, Pennsylvania, and Washington, impose all-party consent requirements (often described as two-party consent). In these states, every participant in a conversation must consent before it can be lawfully recorded. California’s Invasion of Privacy Act, codified at Penal Code Section 632, makes it a crime to record a confidential communication without the consent of all parties. The penalties include imprisonment of up to one year and civil damages of $5,000 per violation or three times the actual damages, whichever is greater.
Illinois has its own eavesdropping statute with similar teeth. And critically, the Illinois Biometric Information Privacy Act, which has generated more class-action litigation than virtually any other privacy statute in the country, may apply to AI transcription tools that use voiceprint analysis as part of their speaker identification features. BIPA requires written consent before collecting biometric identifiers including voiceprints, and provides for statutory damages of $1,000 per negligent violation and $5,000 per intentional or reckless violation. A single team meeting with ten Illinois-based participants, recorded without proper consent, creates potential BIPA exposure of $50,000 for that one call.
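The arithmetic behind that exposure estimate is simple enough to sketch. The sketch below is purely illustrative: it uses BIPA’s statutory damages figures as stated above and assumes, for simplicity, that each participant’s voiceprint capture counts as one violation, a question courts actually assess case by case.

```python
# Illustrative only: statutory damages under Illinois BIPA.
NEGLIGENT_PER_VIOLATION = 1_000    # dollars, per negligent violation
INTENTIONAL_PER_VIOLATION = 5_000  # dollars, per intentional or reckless violation

def bipa_exposure(participants: int, intentional: bool = False) -> int:
    """Potential statutory exposure if each participant's voiceprint is
    captured without written consent, treating each capture as a single
    violation (a simplifying assumption, not a legal conclusion)."""
    rate = INTENTIONAL_PER_VIOLATION if intentional else NEGLIGENT_PER_VIOLATION
    return participants * rate

# One ten-person call, if the conduct is found intentional or reckless:
print(bipa_exposure(10, intentional=True))  # 50000
```

Even under the lower negligence rate, the same call represents $10,000 of potential exposure, and the figures scale linearly with every additional recorded meeting.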
The practical compliance challenge is that when a California-based manager holds a video call with team members in Illinois, Texas, and New York simultaneously, determining which state’s consent law governs the recording is not a question most employees are equipped to answer in real time. Courts have not uniformly resolved which state’s law applies to multiparty calls where participants are located in different jurisdictions. The safest operational default, and the one most privacy counsel recommend, is to treat every meeting as if it requires the consent of all participants, regardless of where they are physically located.
That standard is meaningfully higher than what many employees currently practice when they activate an AI note-taker at the start of a call.
The Data That Is Being Captured Should Alarm Every HR and Legal Team
Consent law violations are the most legally acute risk, but they are not the only one. Consider the categories of information that a comprehensive AI transcription system will capture over the course of a normal month of workplace meetings:
- Performance and employment decisions: Discussions of performance improvement plans, termination decisions, promotion deliberations, and compensation reviews are now being transcribed and stored in third-party systems. These transcripts are discoverable in employment litigation. An offhand comment by a manager in what they believed was a private conversation can be retrieved, timestamped, and presented to a jury.
- Medical and disability information: Conversations about an employee’s medical accommodation request, a return-to-work discussion following FMLA leave, or a workplace injury claim routinely occur over video calls. The Americans with Disabilities Act requires that medical information obtained about employees be kept confidential, and HIPAA can reach the same information where it flows through a group health plan. Those obligations do not stop applying because the conversation happened on Zoom with an AI note-taker present.
- Attorney-client privileged communications: When in-house counsel participates in a meeting that is being transcribed by a third-party AI tool, the attorney-client privilege that would otherwise protect that discussion may be compromised. Privilege can be waived by disclosure to third parties. Whether an AI transcription vendor constitutes a third party for privilege purposes, or whether it functions more like a court reporter within the scope of the privileged communication, is a question courts are still working through. Until there is clear precedent, the safest posture is to ensure that legal counsel’s meetings are identified as Do Not Record contexts.
- Trade secrets and proprietary strategy: Product roadmap discussions, M&A deliberations, competitive strategy conversations, and financial planning meetings are all potential targets for AI transcription. When those transcripts sit in an employee’s personal account on a consumer AI tool, they exist outside the organization’s information security perimeter and outside its data breach response framework.
- Union organizing and labor relations: The National Labor Relations Act protects employees’ rights to engage in concerted activity, including discussions about wages, working conditions, and organizing. Employers who record or transcribe conversations in which employees discuss these topics may face unfair labor practice charges even where the recording was technically consensual.
The Retention Problem Nobody Is Talking About
Even organizations that have addressed consent and data classification have often overlooked a third dimension of the AI transcription problem: what happens to recordings after they are made.
Most AI transcription tools store recordings and transcripts in the user’s personal account within the vendor’s cloud infrastructure. Unless the organization has deployed an enterprise license with centralized administration, those recordings exist entirely outside the organization’s data governance framework. IT cannot see them. Legal holds cannot reach them automatically. Retention schedules do not apply to them. They sit indefinitely, accumulating, until the employee leaves the company or deletes them, at which point they may still exist in vendor backups.
From a data minimization standpoint, this is a direct violation of one of the foundational principles of every major privacy framework. The GDPR requires that personal data be kept no longer than necessary for the purpose for which it was collected. The CCPA’s regulations impose similar data minimization expectations. A meeting transcript that serves no ongoing business purpose but that has been sitting in an employee’s Otter.ai account for three years is not compliant with those principles, even if the original recording was made with proper consent.
The litigation risk is equally real. In employment disputes, the existence of a recording that a party failed to disclose in discovery can lead to sanctions, adverse inference instructions, or worse. An employee who recorded a termination meeting, stored the transcript in a personal AI account, and then denied in discovery that any recording existed has created a serious problem for their case. An employer whose managers routinely used AI transcription but whose legal team had no visibility into those recordings faces comparable exposure when a litigation hold is triggered and responsive materials turn out to be scattered across dozens of personal vendor accounts.
The Behavioral Dimension: What Recording Does to the Conversation Itself
There is a governance consideration here that sits outside pure legal risk and deserves attention on its own terms. The existence of a recording changes the nature of a conversation. This is not a theoretical observation. It is a behavioral reality with practical consequences for organizations that care about candor, psychological safety, and effective management.
Performance improvement conversations are most effective when both parties can speak frankly. A manager who knows a conversation is being transcribed and stored may soften feedback that needs to be delivered clearly. An employee who discovers mid-call that the discussion is being recorded may shift into defensive mode, saying less, qualifying more, and treating the conversation as a legal record rather than an honest exchange. The transcript that results may be factually accurate and still completely miss the point of why the conversation needed to happen.
The same dynamic applies to mentorship conversations, skip-level meetings, DEI focus groups, team retrospectives, and any other workplace discussion whose value depends on psychological safety. When every word is potentially a permanent record, people stop saying the things that need to be said. Organizations that record everything do not get more information. They get more careful information, which is a different and often less useful thing.
This is one reason that a governance framework for AI transcription tools should include a clear list of meeting types where recording is prohibited, not just regulated. Some conversations serve their purpose precisely because they are not documented. Effective governance preserves space for those conversations.
A Practical Framework: Where to Start
- Audit what is already happening before writing a single policy. Before updating the AI use policy, find out what AI transcription tools employees are actually using, whether through officially sanctioned licenses or personal accounts. Survey department heads. Review the applications that have been authorized to connect to the organization’s video conferencing platforms. You cannot govern what you have not mapped.
- Update the AI acceptable use policy to address transcription and recording specifically. A policy that prohibits pasting sensitive data into AI tools but says nothing about AI recording is already obsolete. The updated policy should address which transcription tools are approved for use, consent requirements for all meetings regardless of jurisdiction, categories of meetings that may not be recorded under any circumstances, and employee obligations when another participant joins a call with an unauthorized AI note-taker.
- Create and communicate a Do Not Record list. Some meeting types should be categorically off-limits for AI transcription regardless of consent. At minimum, this list should include performance improvement plan discussions, medical accommodation conversations, termination and disciplinary meetings, attorney-client privileged discussions, compensation and benefits negotiations, union or labor relations discussions, and executive strategy sessions involving material non-public information.
- Address storage, access, and retention explicitly. Require that any approved AI transcription tool be deployed under an enterprise license with centralized administration, not personal accounts. Establish a retention period for transcripts aligned with the organization’s broader data governance schedule and build in automatic deletion. Determine who has access to meeting recordings within the organization and document that access in a data map.
- Train managers specifically, not just employees generally. Managers are the people most likely to be in the meetings where AI transcription creates the greatest legal exposure, and they are also the people most likely to assume that useful technology is permissible technology. Targeted training for people managers on recording consent, Do Not Record categories, and the litigation implications of stored transcripts is more valuable than a general policy rollout.
- Build a feedback loop into the governance framework. AI transcription tools are evolving quickly. New tools appear regularly. Employee behavior adapts faster than policy. A governance framework that is reviewed once and then left static will be outdated within a year. Establish a process for employees to flag new tools they are using, and schedule a minimum annual review of the transcription governance framework against the actual landscape of tools in use.
The Deeper Governance Principle
The AI transcription problem is ultimately a specific instance of a broader principle that the second wave of AI governance has to grapple with: the most serious AI risks in the workplace are not the ones created by employees deliberately feeding data into AI systems. They are the ones created by AI systems passively capturing data from employees who are simply doing their jobs.
The first wave of governance asked: what are employees putting into AI? The second wave has to ask: what is AI taking out of us? Every meeting, every sensitive conversation, every candid exchange between a manager and a report, every discussion that was never intended to become a searchable permanent record is now potentially one notification away from becoming exactly that.
Organizations that get ahead of this will protect themselves from legal exposure, preserve the trust that makes difficult workplace conversations possible, and demonstrate to employees that their privacy is taken seriously even inside the organization’s own walls. Organizations that treat AI transcription as someone else’s problem will eventually find out it was theirs all along, typically at the worst possible moment.
The recording has already joined the meeting. The question is whether your governance framework was in the room first.