The Scribe in the Room: Canada’s Fractured Approach to AI in the Exam Room

There is something quietly profound happening in clinics and hospitals across Canada. A physician sits with a patient, and somewhere in the background — on a laptop screen, in a cloud server, in the hands of a software company operating under terms of service few patients have read — an artificial intelligence is listening. It is parsing the conversation, extracting clinical details, and drafting documentation that once consumed hours of a doctor’s evening. It is, by most accounts, making medicine more efficient. It is also, by any honest accounting, one of the most significant expansions of health data processing that Canadian privacy law has yet been asked to govern.

That AI medical scribes have arrived faster than regulatory consensus is not a surprise. That they have arrived faster than regulatory consistency is a problem worth examining carefully.

At the IAPP Canada Symposium 2026 in Toronto, officials from three provincial data protection authorities sat together to describe how their respective offices have approached guidance for AI scribe tools in healthcare. What emerged from that conversation was not a picture of a unified national framework quietly doing its job. It was a picture of three jurisdictions looking at essentially the same technology, the same sensitivity of data, and the same underlying population of patients — and arriving at meaningfully different answers about how to protect them. Canada’s fragmented health privacy landscape, long an administrative headache for healthcare organizations operating across provincial lines, has now produced something more consequential: fragmented AI governance at precisely the moment it matters most.

What AI Scribes Actually Do — and Why It Matters for Privacy

Before engaging with the regulatory question, it is worth being precise about what AI scribe tools actually do, because the privacy implications are substantially more complex than the headline “AI takes doctor’s notes” suggests.

In a typical deployment, an ambient AI scribe captures the audio of a clinical encounter — a conversation between a physician and a patient that may include mental health disclosures, sexual health history, substance use, financial stressors, relationship dynamics, and other information patients share with their doctors under an expectation of intimate confidentiality. That audio is processed, often by a large language model, to produce a structured clinical note. Depending on the vendor, the audio may be retained, the transcription may be stored, the model may be trained or fine-tuned on aggregate encounter data, and the output may flow into an electronic health record system that itself has its own data sharing arrangements.
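
To make those decision points concrete, here is a minimal sketch, in Python, of the policy choices a single deployment embeds. Every name here is hypothetical, invented for illustration rather than drawn from any vendor's actual product:

    from dataclasses import dataclass

    @dataclass
    class ScribeDataPolicy:
        # Hypothetical per-deployment policy: each field is a privacy
        # decision made somewhere along the capture-to-EHR pipeline.
        retain_audio: bool          # is raw encounter audio stored after transcription?
        audio_retention_days: int   # and for how long?
        retain_transcript: bool     # is the intermediate transcript kept?
        train_on_encounters: bool   # may aggregate encounter data tune the model?
        ehr_destination: str        # downstream EHR with its own sharing arrangements
        hosting_region: str         # e.g. Canadian versus U.S. infrastructure

    def decision_points(policy: ScribeDataPolicy) -> list[str]:
        # Enumerate where sensitive health information is handled,
        # mirroring the capture -> transcribe -> draft -> store flow.
        points = ["capture: ambient audio of the clinical encounter"]
        if policy.retain_audio:
            points.append(f"storage: audio kept {policy.audio_retention_days} days")
        if policy.retain_transcript:
            points.append("storage: transcript retained alongside the note")
        if policy.train_on_encounters:
            points.append("secondary use: encounter data feeds model training")
        points.append(f"output: note flows into {policy.ehr_destination}")
        points.append(f"residency: processed in {policy.hosting_region}")
        return points

The point of the sketch is not the code; it is that each field is a decision someone made, usually without the patient in the room.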

At every one of those steps, a decision is being made about sensitive health information that belongs, in the most meaningful sense, to the patient. And yet in most deployments, the patient’s awareness of the AI’s presence — let alone their meaningful consent to it — is limited at best.

This is not a hypothetical concern. It is a live one. Vendors are actively marketing to health systems. Physicians are adopting tools that integrate with their workflows. And provincial data protection authorities are, to their credit, issuing guidance. The question is whether that guidance is adequate, and whether its fragmentation across provinces underserves patients while sowing compliance chaos for providers.

The Problem with Patchwork

Canada’s health privacy landscape is notoriously complex. Quebec has its Law 25, now among the most rigorous privacy regimes in North America. Ontario has PHIPA. British Columbia has PIPA and the E-Health (Personal Health Information Access and Protection of Privacy) Act. Alberta has its own PIPA, alongside the Health Information Act that governs health custodians specifically. Federally, PIPEDA creates yet another layer that applies in provinces without substantially similar legislation, with the Consumer Privacy Protection Act proposed in Bill C-27 positioned as its eventual successor.

This was already a difficult environment for any organization trying to deploy health technology at scale. What the AI scribe conversation reveals is that fragmentation is not merely a compliance inconvenience. It creates substantive gaps in patient protection that the healthcare system has not yet reckoned with honestly.

Consider a common scenario: a national telehealth platform deploys an AI scribe tool for physicians practicing across multiple provinces. Which provincial guidance governs the encounter? What happens when a patient in Ontario speaks with a physician licensed in British Columbia, on a platform operated from Quebec, using a scribe product hosted on U.S. infrastructure? No single set of guidance answers that question cleanly. And yet the patient — who almost certainly does not understand the data governance complexity of their own virtual appointment — has no less an interest in having their sensitive disclosures protected regardless of which province’s authority happens to have issued the most recent FAQ.
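
A rough way to see the problem is to enumerate every regime with a plausible claim over that single encounter. The mapping below is a deliberate oversimplification (real applicability turns on custodianship and contract, not postal codes), but the statute names are real:

    # Deliberately simplified: real applicability turns on who the custodian
    # is and what the contracts say, not merely on provinces.
    HEALTH_PRIVACY_REGIMES = {
        "ON": "PHIPA",
        "BC": "PIPA / E-Health Act",
        "QC": "Law 25",
        "AB": "Health Information Act / PIPA",
    }

    def plausibly_applicable(patient, physician, platform, hosting_country):
        # Collect every regime with a colorable claim over one encounter;
        # the result is rarely a single regime.
        regimes = {HEALTH_PRIVACY_REGIMES.get(p, "PIPEDA")
                   for p in (patient, physician, platform)}
        if hosting_country != "CA":
            regimes.add(f"foreign law reaching data hosted in {hosting_country}")
        return regimes

    # The scenario above: Ontario patient, B.C. physician, Quebec platform,
    # U.S. hosting.
    print(plausibly_applicable("ON", "BC", "QC", "US"))

Four regimes, one fifteen-minute appointment.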

The DPA officials at the IAPP symposium are not responsible for creating this fragmentation. They are doing their jobs within the constitutional and legislative realities they were given. But the fragmentation they described is a symptom of a structural problem that individual guidance documents cannot solve. Provincial guidance, however thoughtful, is inherently reactive and jurisdictionally bounded. AI scribe vendors are not.

What Good Guidance Needs to Address

The conversation in Toronto surfaced the contours of what regulators are grappling with. Several issues deserve particular attention as guidance matures.

Consent, and what it meaningfully requires. Many current deployments treat patient notification — a sign in the waiting room, a verbal disclosure at the start of an encounter — as sufficient to satisfy consent obligations. This is legally defensible in many contexts but ethically thin. Informed consent in the context of health data has always meant something more than notification. Patients should understand what is being captured, where it goes, how long it is retained, whether it may be used to train AI systems, and what their alternatives are if they decline. A passive notice does not accomplish this. Some guidance documents are beginning to acknowledge this gap; more need to.
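
One way to see the distance between notification and informed consent is to write down what a meaningful consent record would actually have to contain. The structure below is a sketch, not a legal standard; the field names are invented for illustration:

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class ScribeConsentRecord:
        # Hypothetical record of the elements informed consent would need
        # to cover, element by element.
        patient_id: str
        told_what_is_captured: bool   # ambient audio, not just "notes"
        told_where_it_goes: bool      # vendor, subprocessors, the EHR
        told_retention_period: bool   # how long audio and transcripts persist
        told_training_use: bool       # whether encounters may train AI systems
        offered_alternative: bool     # a human-documented visit on request
        consent_given: bool
        recorded_at: datetime

    def consent_is_meaningful(rec: ScribeConsentRecord) -> bool:
        # A sign in the waiting room satisfies none of the first five fields.
        return all([rec.told_what_is_captured, rec.told_where_it_goes,
                    rec.told_retention_period, rec.told_training_use,
                    rec.offered_alternative, rec.consent_given])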

Vendor accountability and data residency. AI scribe tools are predominantly products of U.S.-based technology companies. The sensitivity of health data, combined with the jurisdictional reach of U.S. federal law over data held on American infrastructure, creates genuine concerns that provincial guidance has not uniformly addressed. Some authorities have emphasized the importance of data residency requirements and contractual controls over subprocessors. Others have been less prescriptive. This inconsistency creates incentives for vendors to structure their deployments to minimize exposure to more stringent requirements — a regulatory arbitrage dynamic that serves no one’s interests except the vendors’.
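
At procurement time, the concerns in this paragraph reduce to a handful of checkable questions. A hedged sketch follows, built around a hypothetical vendor profile rather than any real questionnaire or product:

    def residency_concerns(vendor: dict) -> list[str]:
        # Flag residency and subprocessor gaps; a real assessment would be
        # contractual and ongoing, not a one-time technical check.
        concerns = []
        if vendor.get("hosting_region", "").upper() not in ("CA", "CA-CENTRAL"):
            concerns.append("PHI processed outside Canada; foreign legal reach applies")
        for sub in vendor.get("subprocessors", []):
            if not sub.get("contractual_controls", False):
                concerns.append(f"subprocessor {sub['name']} lacks flow-down controls")
        if vendor.get("trains_on_phi") and not vendor.get("training_opt_out"):
            concerns.append("model training on encounter data with no opt-out")
        return concerns

    # Example: a U.S.-hosted vendor with one unbound subprocessor and no
    # training opt-out. "TranscribeCo" is a made-up name.
    vendor = {
        "hosting_region": "us-east",
        "subprocessors": [{"name": "TranscribeCo", "contractual_controls": False}],
        "trains_on_phi": True,
        "training_opt_out": False,
    }
    for concern in residency_concerns(vendor):
        print("-", concern)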

Secondary use and model training. Perhaps the most underexamined issue in current guidance is the question of whether, and under what conditions, de-identified or aggregated clinical encounter data may be used to train or improve AI models. Vendors have obvious commercial incentives to use the rich signal in real clinical encounters to improve their products. The interests of patients and the healthcare system in better AI tools are not trivial either. But the line between acceptable secondary use and a concerning erosion of the contextual integrity of health disclosures has not been clearly drawn by most regulators. Guidance that fails to address this directly leaves a very large and commercially significant question unanswered.
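
If regulators did draw that line, it might take the shape of a gating rule: encounter data reaches a training pipeline only when every condition holds. The conditions below are illustrative guesses at where such a line could sit, not anyone's published standard:

    def may_use_for_training(record: dict) -> bool:
        # Illustrative gate for secondary use of clinical encounter data.
        # Fields and thresholds are assumptions, not regulatory requirements.
        return (
            record.get("deidentified", False)                    # identifiers removed
            and record.get("reidentification_risk", 1.0) < 0.05  # assessed, not assumed
            and record.get("patient_opted_in", False)            # not inferred from silence
            and record.get("purpose") == "model_improvement"     # no broader reuse
        )

Defaults matter here: every field defaults to the restrictive answer, so an incomplete record fails the gate rather than slipping through.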

Clinician accountability. There is a tendency in AI governance conversations to treat the clinician as a relatively passive actor: someone who chooses a vendor and then bears responsibility for ensuring the tool is used appropriately. This framing understates the structural power imbalance. Clinicians are often choosing among vendor options provided by their institution, constrained by procurement decisions made above their level, under workflow pressures that leave little room for deep due diligence on privacy practices. Guidance should grapple with where institutional responsibility lies, not merely with what individual practitioners should do.

The Case for National Coordination

The fragmented picture that emerged in Toronto is not inevitable. Canada has mechanisms for interprovincial cooperation on privacy questions — the Federal-Provincial-Territorial Privacy Commissioner meetings, the work of the Conference of Privacy and Data Protection Authorities — and there is precedent for coordinated guidance on issues of national significance. The joint work of commissioners on facial recognition, and the coordinated investigation into Clearview AI, demonstrate that the infrastructure for collective action exists.

The case for using those mechanisms aggressively on AI health scribes is strong. The technology is expanding rapidly. The data it touches is among the most sensitive that exists. The population affected is not a subset of Canadians who have opted into a particular service — it is anyone who has a medical appointment. And the harm that flows from inadequate protection is not abstract: it includes discrimination in insurance and employment based on health disclosures, stigma around mental health and substance use conditions, and the erosion of the trust that makes patients willing to be honest with their physicians in the first place.

That last harm may be the most serious. The therapeutic relationship depends on patients believing that what they say in an exam room stays there. If that belief is undermined — either because protections actually fail, or because patients simply do not know what happens to their data and default to caution — the clinical consequences are real. Patients who do not disclose symptoms do not get treated. Patients who avoid honest conversations about mental health, substance use, or sexual health because they fear the downstream implications of that disclosure are patients whose outcomes will be worse.

What Comes Next

The provincial DPAs who appeared in Toronto are doing important work. Guidance is better than silence. Engagement with the question of AI in healthcare is better than avoidance. But guidance alone — issued province by province, updated on variable timescales, with varying degrees of prescriptiveness on the questions that matter most — is not enough for a technology that is being deployed nationally, by multinational vendors, in a healthcare system whose patients reasonably expect to be protected regardless of their postal code.

The path forward requires federal engagement on cross-provincial health AI governance, not merely as an aspiration but as a political priority. It requires health regulators and privacy regulators to work more closely together — the decisions that hospital procurement committees and regional health authorities make about AI scribe vendors are, fundamentally, privacy decisions, and they need to be made with that understanding front and center. It requires vendors to be held to standards that are not negotiable on the basis of commercial convenience. And it requires patients to be treated as the principals in their own health data story — not as the residual beneficiaries of a privacy framework designed around institutional interests.

The AI scribe is in the room. The question is whether the law is in the room with it.
