Way back in the summer of 1987, the Supreme Court nomination of Robert Bork turned into a spectacle of rented VHS tapes—his rental history, a list of mostly unremarkable Hollywood titles, splashed across the press like some twisted character assassination. It was the kind of invasion that made you double-lock your Blockbuster returns, and out of that mess came the Video Privacy Protection Act, or VPPA—a scrappy little law meant to shield your couch-potato secrets from prying eyes. Fast-forward nearly four decades, and here we are in the AI race, with streaming queues replacing tape racks, and AI not just watching what you binge, but decoding the very code that hides it. The result? A legal thicket where old protections clash with new tech, turning straightforward privacy suits into debates over whether a chatbot counts as an “ordinary person.” As courts grapple with this, the VPPA’s bite is getting fuzzier, and we’re all left wondering: In a world where machines read between the lines, does your watch history still belong to you?
Enacted in the wake of Bork’s Borking, the VPPA was Congress’s blunt instrument against video voyeurism. At its core, it slaps civil penalties—up to $2,500 a pop, plus attorney’s fees—on any “video tape service provider” that knowingly discloses a customer’s personally identifiable viewing info without consent. Back then, that meant rental shops spilling rental lists to journalists. Today, it’s morphed into a catch-all for the digital age: Netflix sharing binge data with advertisers, Roku piping ESPN stats to trackers, or Meta slurping up clip views from sports sites. With no sweeping federal privacy law on the books, the VPPA has ballooned into a go-to for plaintiffs’ lawyers, fueling class actions against Big Tech that rake in millions. It’s broad, it’s punitive, and it’s survived challenges by leaning on its plain language—covering any entity that rents, sells, or “provides” video content, from dusty DVDs to TikTok scrolls.
But enter AI, and the plot thickens like a poorly rendered deepfake. Recommendation engines, those eerily spot-on suggesters on YouTube or Hulu, already pushed the law’s edges by sharing snippets of your tastes with third parties. Now, generative whizzes like ChatGPT are turbocharging the trouble, turning gibberish code into gossip fodder. Picture this: A URL fragment from Facebook, mangled into something like “title%22%3A%22-%E2%96%B7%20The%20Roast%20of%20Ric%20Flair”—to most of us, it’s digital detritus, not a smoking gun on your wrestling obsession. Paste it into an AI, though, and out pops the title, clear as day, complete with inferences about your late-night viewing habits. Does that count as a “disclosure” under the VPPA? The law hinges on whether the info is identifiable to an “ordinary person,” but AI blurs that line, making the esoteric everyday. As Seton Hall’s Brian Sheppard puts it, these tools “could sort of raise the bar of what an ordinary person can do,” leaving a liability standard “that isn’t as fixed.” Suddenly, what’s “knowingly” shared—and to whom—feels like a Rorschach test for judges.
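To see how thin that technical barrier really is, here’s a minimal Python sketch using only the standard library to decode a percent-encoded fragment like the one above (the exact fragment is an illustrative reconstruction of the example in the text, not taken from any court filing):

```python
from urllib.parse import unquote

# A percent-encoded URL fragment of the kind tracking pixels transmit.
# (Illustrative reconstruction of the example discussed above.)
fragment = "title%22%3A%22-%E2%96%B7%20The%20Roast%20of%20Ric%20Flair"

# unquote() reverses percent-encoding: %22 becomes a quote mark,
# %3A a colon, %20 a space, and the UTF-8 sequence %E2%96%B7 a "▷".
decoded = unquote(fragment)
print(decoded)  # title":"-▷ The Roast of Ric Flair
```

One line of stdlib code, no AI required—which is exactly why the “sophisticated means” line the courts are drawing is so contested.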
The courts are already fracturing over this, with AI poking at the cracks. The First Circuit, in a 2016 ruling, took a generous view: If a recipient could “foreseeably” decode the data to sniff out viewing prefs, it’s a violation. Plaintiffs love it—easy wins where an ID captured by Meta’s tracking pixel hints at your Hulu queue. But the Second Circuit, in a May 2025 smackdown on Flipps Media’s Meta-sharing antics, drew a harder line: No dice if it takes “sophisticated means” to unravel, like URL unescaping wizardry. “It is implausible that an ordinary person would look at [that code] and understand it to be a video title,” they scoffed. The Ninth Circuit’s nodding along, dismissing Roku’s ESPN handoff as too opaque for VPPA wrath.
AI’s wildcard status is supercharging the split. Down in California’s Central District, Judge David O. Carter threw a curveball in a SportsEdTV suit, letting a claim advance because, hey, “a growing proportion of the public has the technological fluency to discern information from lines of code, particularly with the advent of artificial intelligence tools.” On September 16, another Cali judge greenlit a Meta-sharing beef against the same outfit, signaling the West Coast’s openness to AI-fueled reads. Yet the Second Circuit doubled down in June, batting away ChatGPT as a game-changer in an NFL class action: “The existence of tools like ChatGPT would not alter our conclusion.” It’s a patchwork quilt of precedents, where your case’s fate might hinge on the docket draw. Perkins Coie’s Nicola Menaldo, who’s defended Amazon and Google in these scraps, predicts “another circuit split” as AI creeps in—splitting hairs over whether a bot’s brain counts as ordinary smarts.
Experts are buzzing with unease. UCLA’s Andrew Selbst blasts the Second Circuit’s “ordinary person” as a relic that “would essentially eviscerate the test” for forbidden leaks—if folks won’t even Google a clue, privacy’s toast. He floats juries hashing out code-cracking feasibility, turning trials into tech demos. Drexel’s Anat Lior tempers the hype: AI’s no mind-reader yet, prone to “hallucinations” that garble translations. “AI hasn’t gotten to the point where it can translate coded information perfectly,” she warns, but if it does, “privacy as a principle takes a strong hit.” Plaintiffs’ firebrand Isaac Manoff, steering the SportsEdTV fight, sees a “split within the split”—the Ninth might mimic the Second’s test but land First Circuit-style punches. Adding another wrinkle: plaintiffs’ firms that bring California Invasion of Privacy Act and Electronic Communications Privacy Act claims, most notably Swigart, Pacific Trial Attorneys, Almeida Law, Tauler Smith, and Korsinsky, are now stacking VPPA claims on top, making sure that businesses careless with users’ privacy, even over videos simply playing on their sites, pay up for disclosures made without consent.
For companies, it’s a compliance nightmare: Streaming giants and social platforms must now audit AI pipelines, weighing whether a recommendation tweak exposes viewing vibes. Plaintiffs get a boost—class certifications come easier in plaintiff-friendly circuits—but proving “knowing” disclosure amid algorithmic black boxes? That’s a slog. And defendants? They hunker in friendlier circuits, lobbying for clarity while bracing for the next wave of suits. With VPPA dockets swelling thanks to firms like those above—hundreds of suits filed yearly, netting settlements from Hulu’s $8 million pot to Paramount’s $15 million splash—the stakes of non-compliance are sky-high.
Peering ahead, this AI tangle underscores the VPPA’s vintage bones straining against silicon muscles. Without a federal privacy overhaul, it’ll keep limping along as a privacy proxy, but courts might force Congress’s hand via Supreme Court showdowns over these splits. Sheppard muses that the law’s supposedly fixed thresholds could keep shifting “depending on how much better we get at processing data.” For now, it’s a reminder that privacy isn’t static—it’s a chase where tech sets the pace. Next time you hit play on that guilty-pleasure doc, whisper a thanks to Bork’s ghost and maybe clear your cache. In the AI blur, yesterday’s secrets are tomorrow’s headlines, and that’s yet another set of governance complications Captain Compliance can help you guard against.