FTC Takes “Case-by-Case” Approach as Enforcer, Not Rulemaker

The Federal Trade Commission has no intention of becoming a prescriptive regulator in the digital space. That was the clear message from FTC Commissioner Mark Meador at this year’s IAPP Global Summit, where he outlined an enforcement philosophy rooted in spotting and addressing harm rather than laying down sweeping industry rules.

The agency’s posture reflects the broader priorities of the Trump administration’s second term, which has centered on reducing barriers to innovation and competition. Under Chair Andrew Ferguson, the FTC has repositioned itself as a “cop on the beat” — an enforcer, not a regulator.

Speaking in conversation with IAPP Vice President and Chief Knowledge Officer Caitlin Fennessy, CIPP/US, Meador was direct: the current FTC will not issue “ex ante regulations to set rules of the road for everybody.” The agency, he stressed, is “not looking to step in and tell companies how to run their business.”

When Fennessy pressed him on whether the FTC might introduce more specific security standards or take a firmer stance on AI-generated deception, Meador held the line.

“We’re approaching this as enforcers who, again, are trying to spot harm, address it, prevent it from occurring, and remedy it for the injured consumers as much as we can,” he said.

A Settlement That Speaks for Itself

The agency’s approach was on full display just hours before Meador took the stage, when the FTC announced a settlement with dating app OkCupid and its parent company, Match Group Americas. The complaint alleged the company had shared user data with an unrelated third party without consent — and had actively worked to conceal that practice since 2014. Under the settlement, OkCupid and Match Group are barred from misrepresenting their data collection practices, though no financial penalties were imposed.

“The FTC enforces the privacy promises that companies make,” said FTC Bureau of Consumer Protection Director Christopher Mufarrige. “We will investigate, and where appropriate, take action against companies that promise to safeguard your data but fail to follow through — even if that means we have to enforce our Civil Investigative Demands in court.”

Meador declined to comment on the settlement when asked directly.

He did note that guidance from the agency will continue to evolve — a reflection not of politics, he argued, but of the pace of change in digital markets.

“You’re going to have new fact patterns, new problems that we hadn’t thought of even a year ago,” he said. “And when those arise, that requires us to figure out how the law adapts to that new situation.”

Age Verification Moves Up the Agenda

Age assurance emerged as a recurring theme at the Summit, and Meador made clear it is firmly on the FTC’s radar. The agency held a workshop on the topic in January, which fed into a subsequent policy statement signaling flexibility around certain COPPA enforcement requirements to allow age verification without always requiring parental consent.

Meador described adult websites as “low-hanging fruit” — platforms with an obvious obligation to keep minors away from explicit content. But beyond that, he acknowledged, the question of where verification requirements should apply gets more complicated.

He was, however, dismissive of industry arguments that age verification is technically unworkable.

“Admittedly, I think these are sort of shills for the tech companies,” he said. “They will throw up their hands and say, ‘How are we supposed to make heads or tails of a user’s age? This is so difficult.’ I find that kind of silly.”

His benchmark, he explained, is common social practice — the same norms that govern ID checks for alcohol or R-rated films. “Nobody objects to that as an invasion of privacy,” he added.

On the question of social media bans for minors — a growing trend globally — Meador expressed sympathy but said the FTC’s hands are tied without congressional action.

“So as a parent whose job it is to protect your child … you want to have all the tools in your toolkit to protect them,” he said. “To make sure that if you miss something, not through negligence but because stuff happens, you want to know there’s a backstop. I think that’s why we’ve seen this sort of movement across the state, federal and international landscape.”

AI: Deception, Scams, and Competition

The FTC’s AI enforcement work has largely focused on companies making false claims about their use of the technology — an area Meador said will remain grounded in objective, fact-by-fact analysis.

“The key concept for me is what is a reasonable user’s expectation. What do we actually expect to see and what representations were made at the outset?” he said. “That’s going to be very fact-specific, which is a very unsatisfying lawyerly answer, but it’s true. In order to get it right, we can’t be painting with broad brushstrokes.”

Meador was also careful to push back on treating AI as a single, uniform phenomenon — “a monolith,” as he put it — pointing to what he described as healthy competition in the current model landscape, particularly around training data.

The area where he sees the most immediate enforcement concern is AI-enabled scams, where the technology is dramatically lowering the cost and complexity of running fraud operations.

“It’s lowering the barriers to entry into scamming,” he said. “That’s probably the first place we’re seeing it. … You can use that in a scam to deceive someone, and there it’s less about deceiving them as to the nature of whether that content is real.”
