Artificial intelligence is no longer an abstract frontier technology. It is underwriting mortgages, screening tenants, filtering job applicants, approving insurance claims, determining creditworthiness, curating advertisements, and influencing how children interact online. On February 25, 2026, Connecticut Attorney General William Tong issued a sweeping memorandum clarifying a simple but powerful point: existing laws already apply to artificial intelligence, and Connecticut will enforce them.
The memorandum does not create new statutes. Instead, it sends a clear signal to state agencies, businesses, developers, and residents that AI systems are not exempt from long-standing civil rights protections, privacy obligations, consumer protection statutes, data security mandates, and antitrust rules. The document functions as both a warning and a roadmap — a warning to companies that algorithmic opacity will not shield unlawful conduct, and a roadmap for how Connecticut intends to regulate AI using the tools already embedded in its legal framework.
AI’s Rapid Expansion — and Its Expanding Risks
The memorandum opens by acknowledging the breathtaking speed at which AI has expanded across industries. From large language models and predictive analytics systems to automated decision engines embedded in everyday platforms, AI is increasingly integrated into consequential decisions about housing, employment, lending, healthcare, insurance, education, and public accommodations.
Yet alongside innovation, the Attorney General highlights the tangible harms already surfacing. AI systems have been linked to discriminatory outcomes, biased decision-making, nonconsensual synthetic imagery, disinformation campaigns, exploitative pricing models, deceptive advertising, and privacy intrusions. The memo underscores that while individuals may use AI casually to generate content or seek quick answers, businesses deploy AI for high-stakes commercial decisions that can materially affect people’s lives.
The message is unmistakable: when AI systems shape real-world outcomes, they must operate within the guardrails of law.
Civil Rights Laws Apply to Algorithms
One of the memorandum’s most forceful declarations is that antidiscrimination laws apply to algorithmic systems in exactly the same manner as they apply to human decision-makers. The fact that a decision emerges from a model rather than a manager does not diminish liability.
Connecticut’s civil rights statutes prohibit discrimination in employment, housing, public accommodations, insurance, lending, education, healthcare, and other domains. Protected characteristics include race, color, religion, age, sex, pregnancy, marital status, national origin, ancestry, disability, lawful source of income, veteran status, sexual orientation, and gender identity or expression.
If an AI-driven hiring tool disproportionately excludes applicants from protected groups, if a tenant screening algorithm produces disparate housing outcomes, or if automated underwriting systems disadvantage certain communities without lawful justification, those outcomes may constitute unlawful discrimination.
The memorandum also emphasizes that federal statutes — including the Equal Credit Opportunity Act, Fair Housing Act, Title VII of the Civil Rights Act, the Americans with Disabilities Act, and the Age Discrimination in Employment Act — remain fully enforceable. The rescission of federal AI-specific guidance does not eliminate underlying statutory protections. Algorithmic discrimination remains discrimination.
In practical terms, this means companies deploying AI in hiring, credit scoring, insurance underwriting, healthcare determinations, or public accommodations must conduct bias testing, audit outputs for disparate impact, and maintain documentation supporting lawful decision criteria. “Black box” opacity will not serve as a defense.
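To make the bias-testing obligation concrete, the sketch below applies the "four-fifths rule," a common screening heuristic for disparate impact (a flag for further review, not a legal standard in itself). The group names, counts, and threshold are hypothetical illustrations, not anything specified in the memorandum.

```python
# Illustrative disparate-impact screen using the four-fifths rule:
# a group whose selection rate falls below 80% of the highest-rate
# group's is flagged for closer review.

def selection_rate(selected: int, total: int) -> float:
    """Fraction of applicants in a group who received a favorable outcome."""
    return selected / total

def four_fifths_check(rates: dict[str, float], threshold: float = 0.8) -> dict[str, float]:
    """Compare each group's selection rate to the highest-rate group.

    Returns the groups whose ratio falls below the threshold, mapped to
    that ratio, so reviewers can see how far below the line each one sits.
    """
    top = max(rates.values())
    return {group: rate / top for group, rate in rates.items() if rate / top < threshold}

# Hypothetical hiring-tool outcomes by demographic group.
rates = {
    "group_a": selection_rate(60, 100),  # 0.60
    "group_b": selection_rate(30, 100),  # 0.30
}
flags = four_fifths_check(rates)
# group_b's ratio is 0.30 / 0.60 = 0.5, below 0.8, so it is flagged.
```

A flagged ratio does not prove unlawful discrimination, but documenting checks like this, and the follow-up analysis they trigger, is exactly the kind of audit trail the memorandum suggests companies should be able to produce.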
The Connecticut Data Privacy Act and AI Governance
The Connecticut Data Privacy Act (CTDPA) forms the backbone of the state’s AI-related privacy framework. The memorandum makes clear that AI developers, integrators, and deployers are subject to the same data governance standards as other data controllers and processors.
Under the CTDPA, Connecticut residents possess enforceable rights over their personal data. These include the right to access personal information collected about them, correct inaccuracies, delete personal data (including information obtained through third parties), obtain copies of their data, and opt out of certain processing activities.
Importantly, consumers have the right to opt out of the sale of personal data, targeted advertising, and profiling that produces legal or similarly significant effects. AI systems used for automated decision-making fall squarely within this profiling category when they materially impact housing, credit, employment, insurance, healthcare, or essential services.
For AI developers, the implications are substantial. If a model is trained on personal data collected from Connecticut residents, that use must have been properly disclosed. If a company purchases third-party datasets from brokers containing Connecticut consumer information, the original data collection must have been lawfully disclosed and consented to. Otherwise, downstream use may be unlawful.
Additionally, controllers must limit data collection to what is adequate, relevant, and reasonably necessary for disclosed purposes. AI systems trained on massive data lakes without meaningful purpose limitation may face scrutiny under this requirement.
Sensitive Data and AI Systems
The memorandum devotes particular attention to sensitive data categories, including consumer health data, biometric identifiers, precise geolocation information, and data revealing race, ethnicity, religious beliefs, sexual orientation, immigration status, or citizenship.
AI systems cannot process sensitive data without affirmative consent. That requirement becomes particularly significant in contexts such as facial recognition systems, emotion-detection tools, predictive health analytics, and behavioral profiling systems.
The memo further highlights heightened protections for minors. Companies offering AI-powered services to children must exercise reasonable care to avoid foreseeable harms. Design features that intentionally increase, sustain, or extend a minor’s engagement may violate statutory protections. Additionally, effective July 1, 2026, businesses are prohibited from profiling minors in connection with automated decisions related to financial services, housing, insurance, education, employment, healthcare, or access to essential goods unless such processing is strictly necessary to provide the service.
This provision alone signals that AI-driven youth targeting and profiling will receive close regulatory scrutiny.
Data Protection Assessments and Risk Governance
The CTDPA requires data protection assessments for processing activities that present heightened risks of harm. AI systems that profile individuals in ways that could cause financial injury, produce unlawful disparate impact, or intrude upon private affairs fall squarely within that high-risk category.
This means companies deploying AI models affecting Connecticut residents must conduct formal risk assessments evaluating potential harms, bias risks, mitigation strategies, and safeguards. Documentation of these assessments may become central in enforcement proceedings.
In addition to assessment obligations, businesses must implement reasonable administrative, technical, and physical safeguards. AI models trained on large volumes of personal data must incorporate strong cybersecurity protections to guard against data leaks, prompt injection exploits, model inversion attacks, and unauthorized data extraction.
Safeguards Law and Breach Notification Obligations
Beyond the CTDPA, Connecticut’s Safeguards Law imposes affirmative obligations on entities that possess personal information to protect that data from misuse. AI systems that ingest or store personal data fall within the scope of these obligations.
If an AI system inadvertently exposes sensitive information through outputs, model vulnerabilities, or data leaks, Connecticut’s Breach Notification Law may require prompt notification to affected individuals and regulators. Recent amendments broaden the definition of personal information to include login credentials and biometric data — both highly relevant to AI systems utilizing facial recognition or authentication tools.
Failure to notify appropriately can trigger enforcement actions, civil penalties, and reputational harm.
CUTPA: Deception, Manipulation, and AI Marketing
The Connecticut Unfair Trade Practices Act (CUTPA) provides one of the broadest enforcement tools in the Attorney General’s arsenal. CUTPA prohibits unfair or deceptive acts or practices in trade or commerce and provides both private rights of action and sovereign enforcement authority.
Under CUTPA, AI-related conduct may be unlawful if it misrepresents product capabilities, fabricates consumer reviews, deploys deceptive deepfake endorsements, engages in predictive price manipulation during emergencies, or falsely advertises income-generating opportunities tied to AI systems.
The memorandum explicitly references deceptive business opportunity schemes in which AI tools are marketed as guaranteed income generators. Misrepresentations regarding AI’s effectiveness, utility, or performance could constitute actionable deception.
Additionally, AI-powered robocalls or automated marketing campaigns that violate telecommunications restrictions may trigger liability.
Antitrust Enforcement and Algorithmic Collusion
The memorandum’s discussion of the Connecticut Antitrust Act (CAA) signals that algorithmic pricing and AI-driven market coordination are under active scrutiny.
AI systems capable of dynamically adjusting prices across competitors may facilitate tacit collusion. If businesses coordinate through algorithmic tools to fix prices, allocate markets, or suppress competition, they may violate state and federal antitrust statutes.
The memo references ongoing litigation involving algorithmic rental pricing tools, highlighting how software-driven coordination may unlawfully stabilize or inflate housing prices. The warning extends beyond housing: retail, consumer goods, digital advertising, and service industries are all potential arenas for AI-enabled anticompetitive conduct.
Businesses deploying AI pricing systems must ensure that they operate independently and do not facilitate unlawful market coordination.
Notable Enforcement Actions and Coalition Leadership
The memorandum situates AI enforcement within a broader pattern of regulatory action by the Connecticut Attorney General’s Office. The office has pursued litigation against major technology companies, participated in multistate antitrust coalitions, and issued bipartisan warnings regarding AI-enabled election robocalls and scam advertising.
These actions demonstrate that Connecticut is not approaching AI in isolation but as part of a comprehensive consumer protection, competition, and civil rights enforcement strategy.
Audit Your AI Systems Now
For organizations deploying AI in Connecticut, the implications are concrete:
- Audit AI systems for bias and disparate impact.
- Ensure privacy notices clearly disclose AI-driven data uses.
- Implement opt-out mechanisms for profiling and targeted advertising.
- Obtain affirmative consent before processing sensitive data.
- Conduct documented data protection assessments for high-risk AI processing.
- Strengthen cybersecurity controls around AI training data and outputs.
- Avoid deceptive marketing claims about AI capabilities.
- Review algorithmic pricing systems for antitrust compliance.
Compliance cannot be reactive. It must be designed into AI governance frameworks from inception.
Set Up AI Governance With Captain Compliance
For Connecticut residents, the memorandum reinforces that AI does not diminish individual rights. Consumers retain control over their personal data, possess rights to access and deletion, and are protected from discriminatory or deceptive uses of automated systems. Software developed by Captain Compliance and its engineers can help automate these requirements and keep your organization compliant.
The Broader Significance
Attorney General Tong’s memorandum reflects a broader national trend: states are stepping into the AI governance arena even as federal policy remains fragmented. By applying existing statutes to emerging technologies, Connecticut is asserting that innovation does not occur in a legal vacuum.
The document is neither binding nor precedential, but it functions as a policy statement and enforcement signal. It underscores that algorithmic opacity will not shield misconduct, that automation does not excuse discrimination, and that evolving technology must remain accountable to established law.
As AI systems continue to evolve, so too will regulatory interpretations. The memorandum focuses on current statutory frameworks, but it also implicitly invites legislative updates where gaps emerge.
Tong Wants Accountability in the Age of Automation
Artificial intelligence may be new, but the principles governing fairness, transparency, privacy, and competition are not. Attorney General Tong’s memorandum affirms that Connecticut’s legal infrastructure is robust enough to confront algorithmic harms — and that enforcement will not wait for bespoke AI statutes. Individuals who believe they have been harmed by unlawful AI use may file complaints with the Office of the Attorney General.
Businesses operating in Connecticut need to embed compliance into AI systems from design through deployment. For consumers, the message is equally clear: your rights travel with you into the digital age.
As AI becomes more deeply embedded in everyday life, Connecticut has signaled that accountability will scale alongside innovation.
