The AI Scribe Breach That Exposed Patients: Inside the IPC Letter Every Hospital Should Read

On September 23, 2024, a routine virtual hepatology rounds meeting at an Ontario hospital turned into a case study in AI risk. An Otter.ai “notetaker” bot slipped into the call, recorded physicians discussing seven inpatients, and then emailed a transcript and summary to 65 people, including former staff who no longer worked at the hospital.

The Information and Privacy Commissioner of Ontario (IPC) has now published a detailed letter about that incident under the Personal Health Information Protection Act (PHIPA), file HR24-00691. It’s not just a one-off breach write-up; it’s effectively a blueprint for how regulators expect hospitals to manage AI transcription tools, off-boarding, and “shadow” tech that touches patient data.

What Actually Happened in the Otter.ai Hospital Breach

The story isn’t about a hospital-approved AI rollout gone wrong. It’s messier and more familiar: a mix of human shortcuts and consumer tech, and a textbook argument for AI governance.

According to the IPC letter:

  • A hospital physician left employment in June 2023 but remained on a recurring hepatology rounds calendar invite.
  • That physician had joined the meeting group using a personal email address, contrary to hospital policy.
  • In September 2024, he installed Otter.ai on a personal device. The tool integrated with his calendar and saw the old rounds invite.
  • When the September 23, 2024 rounds meeting started, the Otter “bot” joined via the link in his personal calendar and recorded the discussion without the knowledge of the participants.
  • Afterward, Otter automatically sent a meeting summary and transcript link to 65 invitees, 12 of whom had already departed the hospital.

The transcript captured the personal health information (PHI under PHIPA) of seven patients: names, sex, treating physician, diagnoses, medical notes, and treatment details.

The hospital contained the damage by:

  • Cancelling the digital invite so the tool couldn’t join future meetings.
  • Instructing recipients to delete the email and confirming deletion with the majority of them.
  • Blocking AI tools such as Otter.ai and DeepSeek at the firewall level for on-site users.
  • Updating training and policies to explicitly address AI tools and “approved only” technology rules.

The IPC acknowledged those efforts, but the letter pushes the hospital further and spells out specific expectations for AI governance, off-boarding, and breach handling.

Inside the IPC’s Recommendations: A De Facto AI Governance Checklist

The IPC’s analysis focuses on two big themes: containment of the current breach and structural changes to prevent the next one.

1. Containment Must Include Direct Vendor Engagement

The hospital had told the former physician to ask Otter to delete the transcript. He didn’t confirm that he had done so, and there was no proof the data was erased.

The IPC’s position is blunt: as the health information custodian, the hospital should have directly contacted Otter.ai to demand deletion of any PHI related to the September 23 meeting. Relying on the individual account holder — especially an ex-employee — wasn’t good enough.

This is a subtle but important line: once an AI tool processes PHI about your patients, regulators expect you to treat that vendor like any other data recipient. That means:

  • Direct deletion requests by the custodian, not just “please ask your app to delete it.”
  • Documentation of what was requested and what the vendor did.
  • Folding these actions into your formal breach protocol, not handling them ad hoc.
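
None of this requires exotic tooling, but it does require discipline. As a minimal illustration (not drawn from the IPC letter), a custodian could log every vendor-facing containment step in a structured record so that nothing hinges on an ex-employee’s goodwill. The `VendorDeletionRequest` class and all of its field names below are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class VendorDeletionRequest:
    """One custodian-issued deletion demand to a third-party AI vendor (hypothetical)."""
    vendor: str                       # e.g. the AI scribe provider
    incident_id: str                  # internal breach file reference
    data_description: str             # what PHI the vendor may hold
    requested_on: date
    requested_by: str                 # the custodian's privacy office, not the account holder
    vendor_response: Optional[str] = None
    deletion_confirmed_on: Optional[date] = None
    evidence: list[str] = field(default_factory=list)  # emails, certificates of destruction

    @property
    def outstanding(self) -> bool:
        """True until the vendor has confirmed deletion."""
        return self.deletion_confirmed_on is None

# Usage: record the demand when it is sent, then track it until confirmed.
request = VendorDeletionRequest(
    vendor="ExampleScribeCo",
    incident_id="BREACH-EXAMPLE-001",
    data_description="Transcript and AI summary of one clinical rounds meeting",
    requested_on=date(2024, 10, 1),
    requested_by="Hospital Privacy Office",
)
if request.outstanding:
    print(f"Deletion unconfirmed for {request.vendor}; escalate per breach protocol.")
```

The point is less the data structure than the workflow it forces: the custodian sends the request, the custodian collects the evidence, and the file stays open until deletion is confirmed.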

2. Off-Boarding and Email Hygiene Are Security Controls

The breach hinged on a simple failure: no one removed the departing physician from the recurring rounds invite, and he was never required to use a work email address in the first place.

The IPC recommends:

  • Auditing off-boarding processes to ensure all access to systems, including calendar invites, is revoked immediately on departure.
  • Updating the Acceptable Use Policy to make it explicit that hospital business (including virtual rounds) must be conducted from hospital-approved devices and credentials, not personal accounts or phones.
  • Enforcing “meeting lobbies” so hosts must manually admit every participant before PHI is discussed, reducing the chance that a bot quietly joins.

That’s a quiet redefinition of what counts as a security control: meeting invites and lobby settings are not just convenience features; they’re part of your privacy program.
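
To make the off-boarding point concrete, here is a minimal sketch of the kind of sweep the IPC is describing: check every attendee of a recurring, PHI-adjacent meeting against the active staff directory and the approved email domain. The `ACTIVE_STAFF` set, the domain, and the addresses are hypothetical placeholders; a real implementation would pull this data from your identity provider and your calendar platform’s API.

```python
# Hypothetical off-boarding sweep: every attendee of a recurring meeting where PHI
# is discussed must map to an active, hospital-managed account. The directory and
# addresses below are placeholders only.

ACTIVE_STAFF = {"a.lee@hospital.example", "r.singh@hospital.example"}
APPROVED_DOMAIN = "hospital.example"

def audit_recurring_invite(attendees: list[str]) -> dict[str, list[str]]:
    """Return attendees that should be removed from the invite, grouped by reason."""
    findings = {"departed": [], "personal_email": []}
    for email in attendees:
        if not email.endswith("@" + APPROVED_DOMAIN):
            findings["personal_email"].append(email)   # policy: no personal accounts
        elif email not in ACTIVE_STAFF:
            findings["departed"].append(email)         # off-boarded but still invited
    return findings

# Example: the rounds invite still carries an ex-employee and a personal address.
invitees = ["a.lee@hospital.example", "dr.gone@hospital.example", "hepatology.doc@gmail.example"]
print(audit_recurring_invite(invitees))
# {'departed': ['dr.gone@hospital.example'], 'personal_email': ['hepatology.doc@gmail.example']}
```

Run on a schedule, a sweep like this would have flagged both failure points in this breach: the departed physician still on the invite and the personal email address that should never have been there.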

3. A Formal AI Governance and Accountability Framework

The hospital already had an AI governance program and committee, with the Privacy Office involved. The IPC leans on that and goes further, recommending that AI scribes be governed through a formal framework with at least these components:

  • An AI governance committee with cross-functional membership.
  • Policies, practices, and procedures specific to AI tools.
  • Training and awareness focused on AI risks.
  • Initial and ongoing assessment, monitoring, and testing of AI systems.
  • A documented AI risk-management framework.
  • Human oversight over AI outputs and workflows.
  • Complaint, inquiry, and recourse mechanisms.
  • Notification and reporting channels for AI-related incidents.
  • Confidentiality and end-user agreements, plus contractual safeguards with vendors.

In other words, hospitals don’t get to treat AI scribes as “just another app.” If they touch PHI, they sit squarely inside the privacy and governance program, and as multi-million dollar privacy fines and breach settlements keep demonstrating, the cost of getting that wrong only goes up.
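
One way to operationalize that framework is to treat every AI tool request as a structured intake record that cannot reach deployment until the governance gates are satisfied. The sketch below is a simplified, hypothetical illustration of that idea, not the hospital’s actual process; all names are invented.

```python
from dataclasses import dataclass

@dataclass
class AIToolIntake:
    """Hypothetical intake record mirroring the components the IPC lists."""
    tool_name: str
    handles_phi: bool
    privacy_impact_assessment_done: bool = False
    vendor_agreement_signed: bool = False      # contractual safeguards with the vendor
    human_oversight_defined: bool = False      # who reviews outputs, and how
    incident_reporting_channel: bool = False   # where AI-related incidents get reported
    committee_approved: bool = False           # cross-functional governance committee sign-off

    def deployment_allowed(self) -> bool:
        """A PHI-handling tool clears deployment only when every gate is satisfied."""
        if not self.handles_phi:
            return self.committee_approved
        return all([
            self.privacy_impact_assessment_done,
            self.vendor_agreement_signed,
            self.human_oversight_defined,
            self.incident_reporting_channel,
            self.committee_approved,
        ])

scribe = AIToolIntake(tool_name="ExampleScribe", handles_phi=True, committee_approved=True)
print(scribe.deployment_allowed())  # False: the other governance gates are not yet satisfied
```

Whether this lives in a spreadsheet, a GRC platform, or code, the design choice is the same: “approved” is a state a tool earns, not a default it starts with.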

How This Maps Against U.S. HIPAA/HITECH Expectations

The Ontario case is under PHIPA, not HIPAA, but the fact pattern translates easily to a U.S. context.

AI Scribes as Business Associates

Under HIPAA, Otter.ai-style services that receive or create protected health information (PHI) on behalf of a covered entity are almost certainly “business associates.” That status triggers:

  • A written Business Associate Agreement (BAA) before PHI flows to the tool.
  • Application of the Security Rule’s administrative, physical, and technical safeguards directly to the vendor, extended and reinforced by the HITECH Act.
  • Direct civil and criminal liability for the vendor when they mishandle PHI.

In the Ontario case, the hospital had explicitly not approved Otter.ai. In a HIPAA environment, a physician installing an AI scribe on a personal device and piping rounds into it without a BAA would look like classic “shadow IT” and a straightforward violation of the covered entity’s policies.

Breach Notification Obligations

The HIPAA Breach Notification Rule requires covered entities to notify affected individuals, HHS, and in some cases the media when unsecured PHI is acquired, accessed, used, or disclosed in a manner not permitted by HIPAA.

A HIPAA analysis of this fact pattern would likely focus on:

  • Unpermitted disclosure of PHI to an AI vendor with no BAA in place.
  • Unpermitted disclosure to former employees and potentially external addresses.
  • Whether the PHI was encrypted or otherwise “secured” under HHS guidance (here, it clearly wasn’t).

HIPAA doesn’t yet say “you must have an AI governance committee,” but the logic of the IPC letter tracks closely with U.S. enforcement trends: regulators expect covered entities to know where PHI flows, to control vendors, and to treat new technologies like AI scribes as part of the regulated ecosystem, not outside it.

Where AI Governance Law Is Heading: EU AI Act and Beyond

The IPC’s recommendations also align with the broader direction of global AI governance, especially in the EU.

EU AI Act: Health AI as High-Risk

The EU AI Act entered into force on August 1, 2024, with core obligations for high-risk systems set to phase in over the coming years. Health-related AI applications are generally treated as “high-risk,” meaning:

  • Providers and deployers must implement risk-management processes, data-governance and quality requirements, transparency measures, human oversight, and post-market monitoring.
  • There must be logging and traceability so organizations can reconstruct what an AI system did in case of an incident.
  • There are significant potential fines for non-compliance: up to €35 million or 7% of worldwide annual turnover, whichever is higher, for the most serious violations.

An AI scribe that processes clinical conversations and generates documentation would likely fall into the high-risk bucket when deployed in the EU. Many of the IPC’s recommendations — AI governance committee, risk management framework, human oversight, vendor contracts, and incident reporting — look like a national-level implementation of those same ideas.

Recent political debates in Europe have focused on the timeline and scope of these rules, including proposals to delay or lighten some high-risk obligations. But the direction of travel is clear: health-sector AI tools will be among the most tightly regulated.

Canada’s PIPEDA, Bill C-27, and AIDA

Canada’s private-sector privacy law, PIPEDA, already requires organizations to obtain meaningful consent, limit collection and use, safeguard personal information, and report certain breaches that create a real risk of significant harm. The federal privacy commissioner has also published AI-specific guidance and proposed a regulatory framework for AI within a reformed version of PIPEDA.

Bill C-27 — which would have introduced the Consumer Privacy Protection Act (CPPA), a data-protection tribunal, and the Artificial Intelligence and Data Act (AIDA) — stalled and ultimately died on the Order Paper in early 2025. Even so, it’s clear that Ottawa intends to regulate “high-impact” and general-purpose AI systems, and health-sector use cases will inevitably sit near the top of that risk hierarchy.

The IPC’s Otter.ai letter reads like a preview of what detailed, sector-specific AI expectations will look like in Canada even before a new federal AI statute lands: strong governance at the institutional level, clear rules for staff, and direct accountability for vendors.

Data Privacy Lessons: It’s Not Just “AI Risk” — It’s Governance Risk

It’s tempting to read this as a cautionary tale about Otter specifically, or AI note-takers generally. But the real failure points are human and organizational:

  • A personal email address used for work meetings, despite policy to the contrary.
  • A departing physician left on a recurring invite for over a year.
  • No enforced lobby or participant-approval mechanism for meetings where PHI is routinely discussed.
  • No robust procedure for the hospital itself to engage third-party vendors when data needs to be deleted.

AI just made those weaknesses visible — and scalable.

For hospitals, health systems, and even clinics using “simple” tools like AI scribes or transcription apps, the minimum bar now looks something like this:

  • Map where PHI actually flows. If a bot can join your Zoom rounds, it’s part of your data-flow diagram and your risk register.
  • Lock down off-boarding and meeting hygiene. When someone leaves, their access to calendars, invites, and shared drives should disappear as reliably as their badges and EMR logins.
  • Treat AI vendors as regulated partners, not gadgets. BAAs in the U.S., formal contracts and deletion rights in Canada and the EU — and documented processes to use them when incidents occur.
  • Build a real AI governance function. Not a slide deck or an aspirational policy, but a cross-functional group with teeth, ownership of risk assessments, and authority over what gets deployed.
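
Teams that want to start before buying anything can capture most of the “map where PHI actually flows” step in a simple system register. The entries below are entirely hypothetical; the value is in the questions each field forces you to answer for every system that can see PHI, including scribes nobody formally approved.

```python
# Minimal PHI data-flow register (entries are hypothetical, for illustration only).

phi_systems = [
    {
        "system": "EMR",
        "phi": True,
        "approved": True,
        "vendor_contract": "Data-sharing agreement / BAA on file",
        "deletion_process": "Documented in the vendor contract",
    },
    {
        "system": "AI scribe on a clinician's personal phone",
        "phi": True,
        "approved": False,          # classic shadow IT
        "vendor_contract": None,
        "deletion_process": None,
    },
]

# Anything that touches PHI without approval, a contract, and a deletion path is a finding.
for entry in phi_systems:
    if entry["phi"] and (not entry["approved"] or not entry["vendor_contract"] or not entry["deletion_process"]):
        print(f"RISK: {entry['system']} handles PHI outside governance controls")
```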

Privacy software and governance platforms can help here — whether that’s tracking vendors and risk assessments, centralizing data-mapping, or managing consent and incident workflows. Tools like ours here at CaptainCompliance.com, for example, are increasingly being asked not just to handle cookies and web tracking, but to give organizations a single place to track all the systems that touch personal or health information, including AI transcription and documentation tools. Simply put, we can automate your privacy requirements and keep you compliant.

Where This Leaves Healthcare Organizations

The Ontario case is small in raw numbers — seven patients, one meeting, one unapproved AI bot. But the IPC’s letter is written as if the next incident will be bigger and more complex, and it almost certainly will be.

If you’re in the health sector, the practical questions now are straightforward:

  • Do you know which AI tools your clinicians are actually using — on their own phones, laptops, and calendars?
  • Could you, today, send a credible deletion demand to every third party that might hold your patients’ PHI?
  • Does your breach protocol explicitly cover AI tools, shadow apps, and personal devices?
  • Is there a single group inside your organization with the authority to say “no” to a shiny new AI scribe if the governance isn’t there?

Regulators in Ontario, enforcers applying HIPAA/HITECH in the U.S., and the frameworks emerging from PIPEDA and the EU AI Act are converging on the same idea: AI in healthcare is not a side project. It’s infrastructure. And once it touches patient data, it’s subject to the same (or stricter) expectations as your EMR, your lab system, and your patient portal.

If your privacy and AI governance program doesn’t reach that far yet, this is a good moment to fix that — before the next “uninvited bot” shows up in your own rounds meeting.

Online Privacy Compliance Made Easy

Captain Compliance makes it easy to develop, oversee, and expand your privacy program. Book a demo or start a trial now.