Artificial intelligence regulation has rapidly coalesced around a familiar structure: ex ante risk assessment, documentation, and mitigation. Legislators and regulators increasingly require organizations to identify foreseeable algorithmic risks, evaluate their likelihood and severity, and implement controls designed to prevent harm before it occurs. This framework now dominates AI governance discourse in the United States and abroad.
Yet this emphasis on prevention obscures a fundamental reality. Even well-designed systems, subjected to careful risk assessment, still cause harm. Algorithms are deployed in dynamic environments, trained on imperfect data, integrated into complex supply chains, and used in ways that evolve over time. When harm occurs despite preventive efforts, the legal system must answer a different question: what remedies are appropriate once algorithmic harm has already happened?
The Missing Half of AI Regulation
This question has received far less sustained attention than questions of design and governance. Traditional remedies—damages, injunctions, recalls, exclusionary rules—do not map cleanly onto algorithmic systems. Algorithms are not single products, discrete acts, or unitary actors. They are technical artifacts embedded in sociotechnical systems, reused, licensed, fine-tuned, and redeployed across organizational boundaries.
In this remedial gap, one concept has taken on outsized prominence: algorithmic disgorgement. First invoked by the Federal Trade Commission in 2019, algorithmic disgorgement has since become a favored remedy for a wide range of algorithmic harms. Regulators, litigants, and commentators increasingly treat the destruction of models as a general solution for AI-related misconduct.
This article examines that expansion through a critical but neutral lens. It does not ask whether algorithmic disgorgement is ever appropriate. Instead, it asks whether the remedy has been stretched beyond the harms it can meaningfully address, and whether its growing use risks undermining the very remedial principles it purports to advance.
The Appeal of Algorithmic Disgorgement
Algorithmic disgorgement is intuitively appealing. When an algorithm causes harm, destroying the model promises a clean break from the offending conduct. It appears decisive, preventive, and morally satisfying. The remedy signals seriousness, avoids ongoing supervision, and aligns with a broader regulatory impulse to “reset” systems that have gone awry.
For regulators, algorithmic disgorgement offers practical advantages. It avoids the difficulties of auditing opaque systems, eliminates the risk of continued harm, and sidesteps complex questions about model modification or retraining. In enforcement actions, it provides a visible and easily communicable outcome.
For litigants, the remedy offers leverage. The threat of model destruction can dramatically alter settlement dynamics, particularly when an algorithm underlies a core business function. As a result, algorithmic disgorgement has gained traction not only as an imposed remedy but also as a bargaining chip.
These advantages help explain the remedy’s rapid diffusion. But intuitive appeal is not the same as remedial fit. A remedy must do more than signal condemnation; it must address the harm in a way that is responsive, proportional, and consistent with underlying legal principles.
Data-Based Algorithmic Disgorgement and the Disgorgement Principle
Data-based algorithmic disgorgement targets harms arising during a model’s creation, particularly where training data was collected unlawfully or in violation of privacy or consumer protection laws. Its justification tracks traditional disgorgement logic: no party should retain benefits derived from illegal conduct.
In theory, destroying a model trained on unlawfully obtained data strips the wrongdoer of the value created through misconduct. In practice, however, the algorithmic context complicates this logic.
First, the value of an algorithm is rarely attributable solely to its training data. Model architecture, engineering effort, fine-tuning, and integration into downstream systems all contribute to its utility. Destroying the model may remove far more value than the illicit data contributed, overshooting the disgorgement principle.
Second, algorithmic value is often diffuse. Models trained on mixed datasets—some lawful, some not—raise difficult questions about proportionality. Full destruction treats all contributions as tainted, even where unlawful data played a marginal role.
Third, the entity ordered to destroy the model may not be the entity that unlawfully collected the data. In an algorithmic supply chain, data brokers, model developers, and deployers may be distinct actors. Disgorgement imposed on downstream parties risks penalizing entities that neither engaged in nor benefited from the underlying misconduct.
These dynamics undermine the core purpose of disgorgement. Instead of stripping ill-gotten gains from wrongdoers, data-based algorithmic disgorgement often imposes heavy costs on innocent or peripheral actors while failing to calibrate the remedy to the actual benefit derived from unlawful conduct.
Use-Based Algorithmic Disgorgement and the Consumer Protection Principle
Use-based algorithmic disgorgement addresses harms arising from a model’s deployment or use, such as discriminatory outcomes or deceptive automated decisions. Its logic resembles consumer protection remedies like product recalls: remove a harmful product from circulation to prevent further injury.
This analogy, however, breaks down under closer scrutiny. Algorithms are not static consumer products. They are adaptable systems whose behavior can change with new data, altered parameters, or modified deployment contexts.
Destroying a model in response to harmful use may prevent that specific instance of harm, but it does not necessarily address the underlying cause. If the harm arises from deployment decisions, incentive structures, or downstream integration, removing the model may simply shift the problem to a replacement system.
Moreover, use-based disgorgement often misallocates responsibility. The entity ordered to destroy the model may have limited control over how it was used, particularly where models are licensed or embedded in third-party systems. In such cases, the remedy fails to impose costs on the actors best positioned to prevent future harm.
As a result, use-based algorithmic disgorgement frequently falls short of consumer protection goals. It may eliminate a particular artifact without improving systemic practices, leaving similar harms likely to recur.
Collateral Effects in the Algorithmic Supply Chain
The algorithmic supply chain magnifies the unintended consequences of disgorgement. Because models are shared, adapted, and reused, destroying one instance can ripple outward in unpredictable ways.
Downstream users may lose access to tools they rely on, even where they have complied with legal obligations. Upstream developers may face reputational or economic harm disproportionate to their role. In some cases, entire classes of applications may be disrupted by the removal of a single model.
At the same time, the parties most responsible for the harm may remain insulated. Data collectors, system integrators, or decision-makers who shaped deployment conditions may avoid meaningful accountability.
These outcomes invert traditional remedial principles. Instead of aligning burden with blame, algorithmic disgorgement often disperses costs widely while diluting deterrence. For the in-depth, 64-page treatment of these dynamics, we recommend Christina Lee's full article.
When Disgorgement Undermines Its Own Principles
At its core, algorithmic disgorgement aspires to vindicate familiar remedial principles: depriving wrongdoers of ill-gotten gains and preventing ongoing harm to consumers. Yet when applied without sensitivity to the nature of algorithmic systems, the remedy often undermines these very principles.
Data-based algorithmic disgorgement purports to mirror monetary disgorgement and exclusionary rules by eliminating the benefits of unlawful conduct. But because algorithmic value is cumulative, shared, and frequently attributable to lawful inputs and labor, destruction often overshoots. The remedy may strip value unrelated to the misconduct while failing to calibrate punishment to benefit.
Use-based algorithmic disgorgement, by contrast, purports to function like a product recall. Yet unlike defective products, algorithms are rarely defective in isolation. Harmful outcomes often emerge from the interaction between models, data, institutional incentives, and deployment contexts. Removing a model without addressing these surrounding factors does little to reduce future risk.
In both cases, the remedy risks becoming symbolic rather than corrective. It appears forceful while leaving underlying practices unchanged. In extreme cases, it may even discourage remediation by making improvement efforts futile once destruction becomes the default response.
Toward a Framework for Evaluating Algorithmic Remedies
If algorithmic disgorgement is often ill-suited to remedying algorithmic harm, the question becomes how regulators and courts should evaluate alternative remedies. The foregoing analysis suggests two core considerations.
First, responsiveness to harm. A remedy should directly address the mechanism by which harm occurred. If the harm arises from unlawful data collection, remedies targeting data practices and derived value may be appropriate. If the harm arises from deployment decisions, incentives, or governance failures, remedies should focus on those levers rather than on model destruction.
Second, impact across the algorithmic supply chain. Remedies should account for how costs and burdens propagate. A well-calibrated remedy imposes costs on blameworthy actors and minimizes collateral damage to innocent parties. Where a remedy predictably disperses harm or shields those responsible, it should be reconsidered.
These considerations do not preclude strong enforcement. They call for precision. Algorithmic systems demand remedies that are as modular and context-sensitive as the systems themselves.
The Need for a Diverse Remedial Toolkit
The dominance of algorithmic disgorgement reflects a deeper problem: a lack of developed alternatives. As AI regulation matures, enforcement must move beyond a binary choice between inaction and destruction.
Potential remedies include targeted data deletion, mandated model retraining, deployment restrictions, auditing obligations, governance reforms, and structural injunctions. Each carries its own tradeoffs, but together they offer a more flexible and effective response to algorithmic harm.
Diversity in remedies also promotes legitimacy. When enforcement actions align remedies with harms, they reinforce the credibility of regulation. When remedies appear mismatched or excessive, they invite resistance and undermine compliance incentives.
Algorithmic Disgorgement, Reconsidered
Algorithmic disgorgement has emerged as a powerful symbol in AI enforcement. Its rise reflects legitimate frustration with the limits of traditional remedies and the urgency of addressing algorithmic harm.
But power and appropriateness are not the same. When applied without regard to the structure of algorithmic systems and supply chains, disgorgement often fails to remedy harm, misallocates costs, and weakens the principles it seeks to uphold.
This article does not argue against algorithmic disgorgement categorically. It argues for restraint and discernment. Remedies should respond to harms as they actually occur, not as analogies suggest they should.
As regulators and courts confront the next generation of AI-related cases, the challenge will not be identifying misconduct alone. It will be choosing remedies that correct, deter, and restore without causing new harms in the process.
When algorithms harm, the remedy should fit the system.