The investigation Proof News published this week is the kind of story compliance teams should read carefully, set aside, and then read again. A nurse practitioner in Florida used her employer’s Talkspace benefit during one of the most vulnerable periods of her life — pregnancy discrimination, job loss, financial pressure, weeks from giving birth — and confided in a therapist by text. Two years later, when she sued that same employer, the employer’s lawyers obtained a court order compelling production of every word she had ever typed to her therapist on the platform. The therapist, speaking on condition of anonymity to Proof News reporter Annie Gilbertson, said she was shocked at how much information the company held.
This is not a breach. It is not a failure of Talkspace’s security controls. It is the system working exactly as it is designed to work — and that is the part that matters.
The Kamrass case, the nurse practitioner’s suit against her former employer AdventHealth, is a clean illustration of a structural problem in the way US mental health privacy is built: a regulatory architecture written for paper-era therapy notes is now governing real-time message archives at a scale the architecture’s drafters did not contemplate. The result is a confidentiality regime that looks robust from the outside and turns out, in cases like this one, to have load-bearing exceptions wide enough to drive a deposition through.
For privacy and compliance professionals — particularly those advising employers offering mental health benefits, healthcare entities procuring telehealth platforms, and any business processing inferred or sensitive health data — there is a lot to learn here that will not appear in a HIPAA refresher.
A side note worth flagging: the cookie consent banner on the Talkspace.com website does not function and engages in a “dark pattern.” That should be fixed right away.

The Specific Legal Mechanics That Made This Possible
Three pieces of HIPAA’s architecture, in combination, allowed Kamrass’s therapy transcript to be produced in litigation. Each is well-documented in isolation. The interaction is what most people miss.
The court-order exception. Under 45 CFR 164.512(e), a covered entity or business associate may disclose protected health information in response to a court order, subpoena, or other legal process, subject to certain notice requirements. This is not a loophole; it is an explicit, deliberate carve-out. HIPAA was never intended to override the legal system’s discovery process. When opposing counsel obtains an order compelling production of a litigant’s mental health records, the entity holding those records is generally permitted — and, with the appropriate process, required — to comply.
The narrow definition of “psychotherapy notes.” HIPAA does provide heightened protection for “psychotherapy notes,” requiring patient consent before disclosure for most purposes. But the regulatory definition of psychotherapy notes is much narrower than most people assume: under 45 CFR 164.501, the term covers notes recorded by a mental health professional documenting or analyzing the contents of conversation during a private counseling session, and kept separate from the rest of the individual’s medical record. The verbatim transcript of an asynchronous text-based therapy session is almost certainly not a “psychotherapy note” under this definition. It is a session record, treated like any other clinical communication, and the heightened consent protection does not attach.
The business-associate framing. When a therapist on a telehealth platform is providing services through a covered entity (an insurer, employer-sponsored health plan, or healthcare provider), the platform itself is typically operating as a business associate. Business associates are subject to HIPAA but are also subject to its disclosure carve-outs. They produce records when legally compelled, just as a covered entity would.
The combined effect is that a person’s most intimate disclosures, captured in machine-readable form on a platform designed for the convenience of asynchronous communication, are governed by a confidentiality regime with explicit, well-established escape valves — and with a heightened-protection category that, by its own definition, does not apply.
This is not a Talkspace-specific problem. Any text-based or video-based telehealth platform that creates session records faces the same architecture. The novelty is the volume, the searchability, and what happens to those records once they exist.
The Training Data Dimension
The Proof News story reports a fact that should land harder than it has in the broader privacy press: Talkspace’s CEO has told investors the company has compiled “8 billion words, 140 million messages, 6.2 million assessments,” and that this corpus is being used to train a “therapy companion” chatbot called TalkAI, with the eventual goal of securing insurance reimbursement for the automated tool.
Set aside the clinical question of whether AI therapy companions are safe. The privacy question is the one that matters here.
Talkspace executives have assured investors and consumers that data is anonymized before training. The Electronic Frontier Foundation’s Tori Noble, quoted in the Proof News piece, made the point that any privacy professional already knows: anonymization of free-text mental health disclosures is, in practice, extraordinarily difficult, and reidentification of “deidentified” health data has been demonstrated repeatedly in academic literature. The information density of an honest therapy conversation — references to family members, specific events, employers, locations, dates, distinctive emotional patterns — makes it among the hardest classes of personal data to genuinely anonymize.
This matters because the legal status of the training corpus depends on its anonymization holding. HIPAA’s deidentification standard under 45 CFR 164.514 — either the safe harbor method (removing 18 enumerated identifiers) or the expert determination method — provides a regulatory path to non-PHI status. But that path is a snapshot judgment. If the corpus is later combined with auxiliary data, or if the model trained on it memorizes and emits identifying fragments, the deidentification can fail in retrospect. The privacy regime governing the training set is doing a lot of legal work, and the work is more fragile than the public statements suggest.
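To make that fragility concrete, here is a minimal sketch of why pattern-based, safe-harbor-style scrubbing struggles with narrative therapy messages. The scrubber, the regex patterns, and the sample message below are all invented for illustration; they do not represent Talkspace’s pipeline or any vendor’s actual methodology.

```python
import re

# Illustrative only: a naive "safe harbor"-style scrubber covering a few of the
# 18 identifier categories in 45 CFR 164.514(b)(2) with regex patterns.
# Real de-identification pipelines are far more sophisticated; the point of the
# sketch is that narrative context survives even when every pattern matches.

PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "date":  re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def naive_scrub(message: str, known_names: list[str]) -> str:
    """Remove pattern-matchable identifiers plus any names we already know about."""
    for label, pattern in PATTERNS.items():
        message = pattern.sub(f"[{label.upper()}]", message)
    for name in known_names:
        message = re.sub(re.escape(name), "[NAME]", message, flags=re.IGNORECASE)
    return message

# A generic, invented example message -- not drawn from any real session.
msg = ("My manager Dana wrote me up on 3/14/2024, two weeks before my due date. "
       "I cried in the car outside the clinic on Maple Street and called my "
       "sister at 555-867-5309.")

print(naive_scrub(msg, known_names=["Dana"]))
# -> "My manager [NAME] wrote me up on [DATE], two weeks before my due date.
#     I cried in the car outside the clinic on Maple Street and called my
#     sister at [PHONE]."
# The street name (an address-level identifier), the pregnancy timeline, and the
# workplace conflict all survive -- exactly the quasi-identifiers that make
# reidentification of narrative health data feasible when auxiliary data exists.
```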
The acquirer, Universal Health Services, is paying $835 million for Talkspace. A meaningful portion of that valuation is the data corpus and the AI capability built from it. The economics are now structurally pulling in the direction of more intensive use of mental health conversations as training data, not less.
The US-EU Asymmetry, Quietly Visible in the Privacy Policy
The Proof News piece flags, briefly, a detail that deserves more emphasis: Talkspace’s privacy policy gives EU residents the right to opt out of certain personal data processing — and does not extend the same right to US residents, who are limited to deletion requests.
This is the real-world expression of a structural asymmetry that has been visible in US privacy law for a decade. EU residents have meaningful, affirmative rights over their data (Article 21 right to object, Article 22 right not to be subject to solely automated decision-making, the special-category protections of Article 9). US residents, even under the most aggressive state laws, generally do not have an equivalent right to object to processing on legitimate-interests grounds, because the US regime does not run on legitimate interests in the first place.
Washington’s My Health My Data Act, the most significant US law specifically addressing mental health data outside HIPAA, narrows this gap for Washington residents. A handful of other states are following with health-data-specific provisions. But the patchwork is incomplete, the protections are unevenly enforced, and the federal floor — HIPAA, the FTC Act, and the Health Breach Notification Rule — is the same floor that allowed the Kamrass disclosure.
For compliance teams advising US-headquartered platforms, the asymmetry creates a question that often gets pushed to the legal team and ignored: should the US privacy posture mirror the EU one, even where US law does not require it? The answer most platforms reach is no, on cost grounds. The Kamrass case illustrates one of the costs of that answer.
What This Means for Employers Offering Mental Health Benefits
Many of the readers of this article work at companies that offer Talkspace, BetterHelp, Lyra, Modern Health, or comparable platforms as an employee benefit. The Kamrass case has implications that should be on the next benefits-procurement and ERISA-counsel agenda.
The Kamrass disclosure was lawful. AdventHealth’s lawyers obtained a court order. The platform produced the records. Nothing in that chain was procedurally wrong under HIPAA. The structural fact employers should sit with is that the same chain runs in reverse when the employer is the defendant: an employee’s plaintiff’s lawyer can subpoena the same kind of records, in the same way, with the same result.
The mental health benefit you are offering is a record-creating system. If your employees use it during periods of workplace stress — and they do, in volume — that record may be discoverable in any future employment, workers’ compensation, disability, harassment, or whistleblower litigation. Whether this is a reason to stop offering the benefit is a different question. Whether your benefits communications adequately disclose this dimension to employees (almost universally, they do not) is the question that should be asked.
Practical implications for compliance and benefits teams:
- Review the platform’s privacy practices and training-data uses as part of vendor due diligence, not just the security and HIPAA-business-associate posture.
- Examine the disclosures provided to employees at point of enrollment. The standard mental health benefit communication does not mention AI training, court-order exposure, or the limits of “anonymized” data.
- Consider whether any employer-side observability into the platform (administrative dashboards, aggregate reporting) creates additional disclosure surface. In most cases it does, even when nominally aggregated.
- For employers with a meaningful EU/UK workforce, the asymmetry between EU and US user rights is going to surface in works council and DPA inquiries; have a position prepared.
What This Means for Compliance Teams Procuring Mental Health Platforms
For privacy and compliance leads sitting in the procurement seat for telehealth, behavioral health, or general wellness platforms, the Kamrass case is a useful prompt to harden the standard mental health vendor questionnaire. A short list of questions that almost no contemporary vendor questionnaire asks, but should:
- What categories of personal information, generated through the service, are used to train AI models — including foundation models, in-house fine-tunes, or any vendor-provided AI capability?
- What deidentification methodology is applied prior to training, and has it been independently reviewed?
- Has the vendor experienced any prior litigation, subpoena response, or law-enforcement request involving session records, and on what scale?
- Are session records, message logs, audio, and video subject to the same retention period? If retention varies by data type, what is the schedule?
- What is the vendor’s policy on objecting to subpoenas or court orders before complying, and what is the standard notice timeline to affected users?
- Are there meaningful differences between the rights afforded to EU/UK users and US users, and what is the rationale?
- For platforms incorporating AI-driven tools (chatbots, risk algorithms, automated assessments), what is the human-review threshold, and what is the disclosure to the patient about the role of automation in their care?
- What is the data flow when the user is also covered by a state law with heightened mental-health protections (Washington MHMDA, Connecticut’s CTDPA sensitive-data treatment, others)?
A platform that cannot answer these clearly is not necessarily a bad platform. A platform that resists answering them, or whose answers do not match the public privacy policy, is.
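When the answers come back, it helps to record them in a form that can be compared against the vendor’s public privacy policy at renewal rather than left in email threads. A minimal sketch of one way to do that, assuming a hypothetical in-house tracking structure (the class names, fields, and vendor name below are invented for illustration, not an industry standard):

```python
from dataclasses import dataclass, field
from enum import Enum

# Hypothetical structure for recording vendor questionnaire responses so they
# can be diffed against the vendor's public privacy policy over time.

class Status(Enum):
    ANSWERED = "answered"
    REFUSED = "refused"
    CONTRADICTS_POLICY = "contradicts_public_policy"

@dataclass
class VendorAnswer:
    question: str
    status: Status
    answer: str = ""
    evidence: list[str] = field(default_factory=list)  # links to DPAs, policies, audit reports

@dataclass
class MentalHealthVendorReview:
    vendor: str
    answers: list[VendorAnswer] = field(default_factory=list)

    def red_flags(self) -> list[VendorAnswer]:
        """Encodes the rule of thumb above: refusal or contradiction is the signal."""
        return [a for a in self.answers
                if a.status in (Status.REFUSED, Status.CONTRADICTS_POLICY)]

review = MentalHealthVendorReview(vendor="ExampleTelehealthCo")
review.answers.append(VendorAnswer(
    question="What categories of service-generated data are used to train AI models?",
    status=Status.REFUSED,
))
print([a.question for a in review.red_flags()])
```

The red_flags helper is simply the paragraph above in executable form: an unanswered question is a gap to close, while a refusal or a contradiction with the public policy is the finding that belongs in the procurement decision.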
What This Means for the Coming Wave of AI Therapy Tools
The Talkspace disclosure intersects with a separate, fast-moving regulatory front: state-level legislation specifically addressed to AI therapy tools.
Illinois banned therapy bots in 2025. A California legislator introduced a similar measure in January 2026. Therapist unions are increasingly demanding contractual prohibitions on AI replacement of clinicians, and Kaiser Permanente’s therapist union struck on March 18 over exactly this issue. The American Psychological Association has been in a multi-year, sometimes-contentious dialogue with Talkspace over the company’s clinical and privacy practices, including the libel suit Talkspace filed against PsiAN co-founder Linda Michaels in 2019.
The trajectory is unmistakable. AI mental health tools are going to be regulated, both as products (FDA pathway questions) and as practices (state-level bans, scope-of-practice rules, union-bargained prohibitions). The privacy regime governing the training data behind those tools is going to come under significant scrutiny. The Kamrass case is one early data point in a much longer story.
For organizations building or deploying AI tools in mental health, the Captain Compliance read is straightforward. The legal posture available today — “we anonymized the corpus, we comply with HIPAA, our terms of service include consent” — is not the legal posture available three years from now. Build the program against where the regulation is going, not where it is.
What This Means for Individuals
Most of this article is written for compliance and privacy professionals. The short passage written for everyone else: if you are using a text-based or video-based therapy platform, your sessions are being recorded, the recordings are being kept, and those recordings are subject to the legal-process exceptions in your country’s privacy regime. None of that is necessarily a reason not to use the platform. It is a reason to know what you are using.
A traditional in-person therapy session, in most US jurisdictions, leaves behind a few sentences of clinical notes that are not subject to easy production. A text-based session leaves behind a verbatim transcript that is. Whatever else changes in the digital health ecosystem, that asymmetry is not going away.
The Kamrass Case as Privacy Case Law
The Kamrass case is not a story about a privacy failure at a single company. It is a story about a privacy regime that was written for one set of facts and is now governing a very different set of facts, with structural exceptions that produce predictable, hard outcomes when those exceptions are exercised. HIPAA was not designed to govern 140-million-message training corpora. The “psychotherapy notes” carve-out was not designed for verbatim asynchronous transcripts. The deidentification standard was not designed for the inference power of modern AI.
That gap between architecture and reality is where the next several years of mental health privacy enforcement, litigation, and legislation will play out. The compliance teams that read cases like Kamrass as legal news rather than as a signal about where the regulation is heading will find themselves three years behind the curve at exactly the moment that being three years behind the curve gets expensive.
This is a sensitive topic, and if any of what you read here resonates personally — about your own mental health or someone you care about — there are professionals and resources that can help. The point of this article is to look honestly at the architecture; the point of seeking support is to look honestly at the person.
Captain Compliance covers health-data privacy, the intersection of AI and consumer protection, and the operational compliance work that follows from cases like this one.