When the Voice Isn’t Real but the Legal Risk Is: AI Governance in the Creative Industries

Synthetic voices, AI-generated likenesses, and deepfake performances are forcing a reckoning across entertainment, advertising, and media — and the legal frameworks scrambling to keep up are drawing from older principles than you might expect.

The creative industries have always run on the commodification of identity. An actor’s face. A musician’s voice. An athlete’s likeness on a jersey or a cereal box. The right of a person to control how their identity is used commercially — and to be compensated for it — is one of the foundational legal concepts undergirding the entire entertainment economy.

Generative AI didn’t invent the tensions in that economy. But it has supercharged them in ways that existing frameworks were never designed to handle at scale. When a studio can synthesize a deceased actor’s voice from archival recordings, when an advertiser can generate a celebrity likeness without a contract, when a streaming platform can produce AI-voiced audiobooks that sound indistinguishable from human narrators — the question of who controls what, and under what legal theory, becomes urgent in ways that can’t be deferred.

What’s emerging in response isn’t a clean new regulatory architecture. It’s a patchwork of legislation, contract innovation, and regulatory guidance that, as SiriusXM’s Associate General Counsel for Privacy, Haley Fine, has noted, remains “rooted in longstanding privacy, publicity and consumer protection principles.” Understanding what that means — and where those roots don’t reach — is the essential compliance challenge for anyone working at the intersection of AI and creative content.

The Biometric Problem

Start with the biology. Voices and faces aren’t just aesthetically distinctive — they are biometrically distinctive. And that distinction matters enormously for how they’re regulated.

Biometric privacy statutes — Illinois’ Biometric Information Privacy Act (BIPA) being the most litigated, but now joined by laws in Texas, Washington, and a growing list of other states — treat voice prints and facial geometry as a category of sensitive personal data that requires explicit consent before collection, use, or disclosure. These laws were written primarily with employment contexts and consumer technology in mind. They weren’t designed with generative AI training pipelines in the foreground.

But here’s the compliance exposure: training a generative AI model on recordings or images of identifiable individuals can constitute the collection of biometric identifiers under existing statutes — even if no biometric template is explicitly extracted. Plaintiffs’ attorneys have been creative in applying BIPA to contexts its drafters didn’t anticipate, and AI training data is already a litigation frontier. Several class actions targeting AI companies for scraping voice and image data without consent are working their way through courts, and the outcomes will shape how “collection” is defined in this context for years.

For creative industry organizations — studios, labels, ad agencies, platforms — this creates a concrete due diligence obligation: the provenance of AI training data matters, and “we licensed it from a third party” is not a complete answer if that third party collected it without the biometric consents the applicable statutes require.
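
To make that due diligence concrete, the sketch below shows one way a compliance team might track training data provenance as structured records rather than scattered email threads. It is a minimal illustration in Python; every field name, the flagging logic, and the three-state jurisdiction set are hypothetical choices made for the example, not a statutory checklist.

```python
from dataclasses import dataclass, field

# Hypothetical provenance record for a dataset used to train a generative model.
# Field names and the flagging heuristics are illustrative, not legal standards.
@dataclass
class TrainingDataRecord:
    dataset_name: str
    source: str                        # e.g. "internal archive", "third-party license"
    contains_identifiable_voices: bool
    contains_identifiable_faces: bool
    biometric_consent_obtained: bool   # explicit consent from the individuals depicted
    consent_documented: bool           # written records, not a vendor's verbal assurance
    jurisdictions: list[str] = field(default_factory=list)

# Illustrative, not exhaustive: states with dedicated biometric consent statutes.
BIOMETRIC_CONSENT_STATES = {"IL", "TX", "WA"}

def flag_biometric_gaps(record: TrainingDataRecord) -> list[str]:
    """Surface issues for counsel review; this is triage, not a compliance verdict."""
    flags = []
    has_biometrics = (record.contains_identifiable_voices
                      or record.contains_identifiable_faces)
    if has_biometrics and not record.biometric_consent_obtained:
        flags.append("identifiable voice/face data with no biometric consent on file")
    if record.biometric_consent_obtained and not record.consent_documented:
        flags.append("consent asserted but undocumented; 'we licensed it' is not enough")
    if has_biometrics and BIOMETRIC_CONSENT_STATES & set(record.jurisdictions):
        flags.append("data tied to biometric-statute states; jurisdiction-specific review")
    return flags

# Example: a third-party voice dataset with no documented consents gets flagged.
record = TrainingDataRecord(
    dataset_name="narrator_voices_v2",
    source="third-party license",
    contains_identifiable_voices=True,
    contains_identifiable_faces=False,
    biometric_consent_obtained=False,
    consent_documented=False,
    jurisdictions=["IL", "CA"],
)
for issue in flag_biometric_gaps(record):
    print("FLAG:", issue)
```

The point of structuring the record this way is the last clause of the paragraph above: a “third-party license” value in the source field does nothing to satisfy the consent fields, which must be answered on their own.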

Publicity Rights: The Other Framework

Separate from biometric privacy — though often confused with it — are rights of publicity. Where biometric laws are fundamentally about privacy and data protection, publicity rights are about the commercial value of identity: the right of an individual to control the use of their name, likeness, voice, and persona for commercial purposes.

Publicity rights are a creature of state law in the United States, and the variation is significant. California’s protections are among the most robust, extending to deceased individuals and applying to voice as well as likeness. New York has historically been narrower. And most states have laws that predate generative AI by decades, built around the paradigm of a photograph or a celebrity endorsement, not a synthetic voice trained on a hundred hours of studio recordings.

The legislative response has been rapid by regulatory standards. Several states have passed or introduced AI-specific publicity rights legislation addressing synthetic voices and likenesses. Tennessee’s ELVIS Act — the Ensuring Likeness, Voice, and Image Security Act, effective July 2024 — is among the most explicitly tailored to the music industry, creating a specific right of action when AI is used to produce a simulated voice without authorization. California has followed with its own suite of AI-related bills targeting the use of digital replicas in entertainment contexts.

The common thread is not new doctrine. It is existing publicity rights doctrine applied with explicit reference to AI-generated output. What the statutes are resolving is the ambiguity that would otherwise require courts to decide whether a synthesized voice or generated likeness counts as a “use” of someone’s identity under laws written before those concepts existed technologically.

The Contract Layer

While legislatures catch up, contracts are doing significant work — and in some respects, are moving faster and with more precision than statutes.

The 2023 SAG-AFTRA strike produced some of the most consequential AI-related contract language in the entertainment industry to date. The eventual agreements addressed the use of digital replicas of performers, requiring consent and compensation for AI-generated versions of an actor’s likeness or voice. The principle established — that a performer’s AI-generated replica is a use of their identity that triggers the same consent and pay obligations as a traditional performance — has become a baseline expectation in ongoing negotiations across the industry.

Music industry contracts are evolving along similar lines. Labels, distributors, and streaming platforms are increasingly confronting questions about what AI-assisted or AI-generated music means for royalty structures, attribution obligations, and the underlying rights in training data. An AI model trained on an artist’s catalog to produce “similar” music raises questions that existing licensing frameworks weren’t designed to answer: is the output a derivative work? Does it infringe the original? Does it matter if the output sounds like the artist even if no specific recording was reproduced?

These aren’t hypotheticals. They are active disputes, some resolved through litigation, some through negotiation, and some simply deferred because neither party wants to be the one to force a court ruling that might go badly.

For compliance purposes, the practical implication is that AI governance in creative industry contexts requires legal review that spans both the IP and privacy domains — and that contracts need to be updated to address AI use cases that didn’t exist when they were drafted. A talent agreement, licensing deal, or production contract that is silent on AI-generated replicas is not neutral; it’s ambiguous in ways that are increasingly likely to become expensive.

Consumer Protection: The Third Pillar

Beyond biometric data and publicity rights, there is a third regulatory framework bearing on AI in creative industries that is often underweighted: consumer protection.

The FTC has been explicit that AI-generated content used in commercial contexts is subject to existing deception standards. If a consumer reasonably believes they are hearing a human performer or a real celebrity voice in an advertisement, and that belief is material to their response to the ad, producing that impression through AI without disclosure can constitute a deceptive practice under Section 5 of the FTC Act. The FTC’s guidance on endorsements and testimonials — updated in 2023 to address AI — makes clear that synthetic or AI-generated endorsements are subject to the same disclosure requirements as human ones.

Several states have enacted or proposed AI disclosure requirements specifically targeting synthetic media. These laws vary in scope, but the general thrust is consistent: when AI-generated content depicting real individuals is used in commercial, political, or public-interest contexts, disclosure is required. The specifics of what that disclosure must look like, where it must appear, and what exceptions apply differ jurisdiction by jurisdiction.

For advertisers and marketers specifically, this creates a compliance surface that sits alongside — and sometimes in tension with — the creative goals of a campaign. “It sounds like them” is no longer a neutral creative choice. It’s a legal question about whether consent was obtained, whether disclosure is required, and whether the applicable publicity rights statute in the jurisdictions where the ad runs creates exposure.

What an AI Governance Framework for Creative Industries Actually Needs

Pulling these threads together, organizations operating at the intersection of AI and creative content need a governance framework that addresses at least four distinct obligation areas — and that recognizes they don’t always point in the same direction.

Biometric compliance: Inventory what AI systems are in use, what training data they rely on, and whether the collection, storage, or processing of voice or image data from identifiable individuals triggers obligations under applicable biometric privacy statutes. This includes reviewing third-party AI tools and data providers, not just internally developed systems.

Publicity rights clearance: Establish a review process for any AI-generated content depicting, simulating, or trained on the identity of real individuals. This process should include jurisdiction-specific analysis — what is permissible in one state may not be in another — and should not assume that licensing underlying recordings or images resolves publicity rights questions, because often it doesn’t.

Contract modernization: Audit existing talent agreements, licensing deals, and platform terms for AI-related gaps. Silence on digital replicas, synthetic voices, and AI training use of contracted content is increasingly a source of dispute. New agreements should include explicit AI provisions; existing ones should be evaluated for amendment.

Consumer-facing disclosure: Develop internal standards for when AI-generated content requires disclosure, how that disclosure is implemented, and how compliance is documented. Don’t wait for a final federal standard — state laws and FTC guidance already create disclosure obligations that are enforceable now.
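
As a practical aid, these four areas can be condensed into a pre-release clearance checklist. The Python sketch below is one hypothetical way to structure that review; the questions paraphrase the obligations discussed above, and none of the names or categories come from any statute.

```python
# Hypothetical pre-release checklist for AI-generated content that depicts or
# simulates a real person. The questions paraphrase the four obligation areas
# above; they are prompts for legal review, not a substitute for counsel.
CLEARANCE_CHECKLIST = {
    "biometric": [
        "Was the model trained on voice or image data of identifiable people?",
        "Are biometric consents for that data obtained and documented?",
    ],
    "publicity_rights": [
        "Does the output depict or simulate a real person's name, voice, or likeness?",
        "Is there a signed release covering this use in every jurisdiction where it runs?",
    ],
    "contract": [
        "Do the applicable talent or licensing agreements address digital replicas?",
        "If an agreement is silent on AI, has counsel assessed the exposure?",
    ],
    "disclosure": [
        "Could a reasonable consumer mistake the content for a human performance?",
        "Do state synthetic-media laws or FTC endorsement guidance require disclosure?",
    ],
}

def unresolved_items(answers: dict[str, bool]) -> list[str]:
    """Return every checklist question not yet affirmatively resolved."""
    return [
        question
        for questions in CLEARANCE_CHECKLIST.values()
        for question in questions
        if not answers.get(question, False)
    ]

# Example: mark two items resolved and list what still blocks release.
resolved = {
    "Does the output depict or simulate a real person's name, voice, or likeness?": True,
    "Could a reasonable consumer mistake the content for a human performance?": True,
}
for item in unresolved_items(resolved):
    print("OPEN:", item)
```

A checklist like this is deliberately conservative: any unanswered question defaults to unresolved, which matches the practical reality that silence on an AI question is ambiguity, not clearance.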

The Underlying Principle

Haley Fine’s observation — that emerging AI governance in creative industries remains rooted in longstanding privacy, publicity, and consumer protection principles — is both reassuring and clarifying. Reassuring because it means compliance teams don’t need entirely new conceptual frameworks to approach these questions. Clarifying because it means the work is largely about applying well-established principles to contexts where they haven’t yet been formally tested.

The legal infrastructure around synthetic identity is being built in real time, through litigation, legislation, and contract negotiation happening simultaneously across multiple jurisdictions and industries. That process is messy. But for compliance professionals, the underlying question — does someone have the right to control this use of their identity, and has that right been respected? — is one that privacy and publicity law have been answering for decades.

Generative AI changed the scale and ease of the problem. It didn’t change the question.
