At public events and policy roundtables, DeSantis has argued that Florida has both the authority and the responsibility to act, framing AI governance as a matter of consumer protection, transparency, and public safety. His position reflects a broader trend among states that are unwilling to wait for Congress to settle the debate over AI regulation before setting their own guardrails.
What Florida’s AI Proposals Aim to Regulate
- Transparency requirements: Consumers would have the right to know when they are interacting with AI systems, such as automated chat tools or AI-driven decision engines.
- Limits on AI-based therapy claims: AI systems would be restricted from presenting themselves as licensed mental health or therapy providers.
- Protections for minors: Proposals include parental controls and additional safeguards for children using AI-powered platforms.
- Likeness and identity safeguards: Restrictions on using a person’s name, image, or likeness without consent.
- Insurance and high-impact decisions: Guardrails on using AI as the sole basis for certain insurance-related determinations.
- Foreign-entity restrictions: Limits on government use of AI systems tied to certain foreign actors.
- AI data center policy: Proposals that would restrict taxpayer subsidies or incentives for large-scale AI data centers.
Why this matters: Florida’s approach blends consumer rights, data governance, and infrastructure policy. That mix signals that AI regulation will not be limited to software behavior alone, but will increasingly reach contracts, procurement, and physical infrastructure.
The Federal Push for AI Uniformity
Florida’s push comes as federal officials emphasize the risks of a fragmented regulatory landscape. Recent executive actions have encouraged federal agencies to challenge or discourage state AI laws that conflict with national policy goals, with some proposals tying federal funding to compliance with federal AI priorities.
DeSantis has rejected the idea that an executive order can override state legislative authority, arguing that only Congress can preempt state law. He has also criticized proposals that would pause or restrict state AI regulation for extended periods, characterizing them as protective of large technology companies rather than consumers.
Business reality: Even if federal courts ultimately limit state authority, companies should not assume a compliance “safe harbor” in the near term. State-level proposals often influence enforcement expectations, litigation theories, and consumer advocacy regardless of their final legal status.
Why This Matters Beyond Florida
Florida is unlikely to be the last state to pursue an AI “bill of rights.” Other states are already exploring similar concepts focused on transparency, minors’ protections, biometric identity, and automated decision-making. Florida’s framing may serve as a template, especially in politically aligned states, accelerating a patchwork of overlapping AI obligations.
For companies operating across state lines, the challenge is not complying with one AI law, but managing variability without rebuilding systems for each jurisdiction.
AI compliance is converging with privacy compliance. Companies that already struggle with consent management, disclosure consistency, and audit documentation are likely to face amplified risk as AI rules expand.
What Florida Companies Should Consider When Deploying AI Systems
Organizations deploying AI—particularly in consumer-facing or high-impact contexts—should take concrete steps now rather than waiting for legal certainty.
1) Inventory AI use cases
- Identify where AI interacts directly with users or influences outcomes.
- Document which systems rely on automation versus human review (a minimal inventory record is sketched below).
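As a rough illustration, a first-pass inventory can be as simple as a structured record per use case. The sketch below is a minimal, assumed shape rather than a format any Florida proposal prescribes; every field name is illustrative.

```python
# A minimal sketch of an AI use-case inventory record. Field names are
# illustrative assumptions, not requirements from any specific statute.
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    name: str                  # e.g., "support chatbot"
    owner: str                 # accountable team or person
    user_facing: bool          # interacts directly with consumers?
    influences_outcomes: bool  # affects eligibility, pricing, coverage?
    human_review: bool         # is there a human in the loop?
    vendors: list[str] = field(default_factory=list)

# Example entries for a first-pass inventory.
inventory = [
    AIUseCase("support chatbot", "CX team", user_facing=True,
              influences_outcomes=False, human_review=False),
    AIUseCase("claims triage model", "Claims ops", user_facing=False,
              influences_outcomes=True, human_review=True),
]

# Flag the highest-risk entries: user-facing or outcome-influencing
# systems that currently run without human review.
needs_attention = [u for u in inventory
                   if (u.user_facing or u.influences_outcomes)
                   and not u.human_review]
```

Even this much structure makes it straightforward to surface the systems most likely to attract transparency or oversight obligations.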
2) Implement transparency controls
- Standardize disclosures when AI is in use.
- Ensure notices are consistent across websites, apps, and customer communications (one approach is sketched below).
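One way to keep notices consistent is to treat the disclosure as a single piece of configuration rendered on every surface, rather than copy maintained separately per channel. The sketch below is a minimal illustration; the notice text and surface names are hypothetical.

```python
# A minimal sketch: keep one canonical AI disclosure and render it per
# surface, so wording cannot drift between the website, app, and emails.
# The notice text and surface registry are hypothetical examples.
AI_DISCLOSURE = ("You are interacting with an automated AI system. "
                 "A human representative is available on request.")

SURFACES = {
    "web_chat":   "banner",
    "mobile_app": "modal",
    "email":      "footer",
}

def disclosure_for(surface: str) -> str:
    """Return the canonical notice tagged with its presentation style."""
    style = SURFACES.get(surface)
    if style is None:
        raise ValueError(f"No disclosure style registered for {surface!r}")
    return f"[{style}] {AI_DISCLOSURE}"

for s in SURFACES:
    print(disclosure_for(s))
```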
3) Protect minors and sensitive users
- Introduce age-aware design and parental controls where applicable.
- Establish guardrails for health-adjacent or emotionally sensitive AI use cases (an age-gating sketch follows).
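In practice, age-aware design often reduces to a feature gate that blocks or downgrades sensitive AI features for minor accounts. The sketch below shows one possible gate; the age threshold, feature names, and parental-consent flag are assumptions, not statutory values.

```python
# A minimal sketch of an age-aware feature gate. The age threshold,
# feature names, and parental-consent flag are illustrative assumptions.
SENSITIVE_FEATURES = {"ai_companion_chat", "mood_tracking"}

def feature_allowed(feature: str, age: int, parental_consent: bool) -> bool:
    """Allow adults everything; gate sensitive features for minors."""
    if age >= 18:
        return True
    if feature in SENSITIVE_FEATURES:
        return parental_consent  # minors need a recorded parental opt-in
    return True

assert feature_allowed("ai_companion_chat", age=15, parental_consent=False) is False
assert feature_allowed("ai_companion_chat", age=15, parental_consent=True) is True
```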
4) Govern high-impact decisions
- Require human oversight for insurance, eligibility, or similar determinations.
- Maintain audit trails showing how AI outputs are reviewed (one possible record shape is sketched below).
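An audit trail is easiest to defend when every AI-assisted determination produces a structured record of both the model’s recommendation and the human decision. The sketch below shows one possible record shape and a JSON-lines log; the field names and log format are assumptions, not a prescribed standard.

```python
# A minimal sketch of an audit record for a human-reviewed AI decision.
# Field names and the JSON-lines log format are illustrative assumptions.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionAudit:
    case_id: str
    model: str             # which AI system produced the recommendation
    ai_recommendation: str
    reviewer: str          # the human who made the final call
    final_decision: str
    rationale: str         # why the reviewer accepted or overrode the AI
    timestamp: str

def log_decision(record: DecisionAudit, path: str = "decisions.jsonl") -> None:
    """Append one decision as a JSON line so the trail is easy to query."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision(DecisionAudit(
    case_id="CLM-1042",
    model="claims-triage-v2",
    ai_recommendation="deny",
    reviewer="j.alvarez",
    final_decision="approve",
    rationale="Supporting documents satisfy the policy exception.",
    timestamp=datetime.now(timezone.utc).isoformat(),
))
```

Recording the rationale alongside the override is what turns a log into usable evidence that humans, not the model, made the final determination.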
5) Align contracts and vendors
- Review AI vendor agreements for transparency, control, and audit rights.
- Assess infrastructure exposure, including data center and compute dependencies.
To centralize disclosures, consent records, and governance evidence across evolving AI and privacy laws, consider a platform such as Captain Compliance (CaptainCompliance.com).
How Captain Compliance Supports AI Governance
Captain Compliance helps organizations operationalize compliance by centralizing transparency, consent management, and audit-ready workflows that adapt as laws change.
- Consistent disclosures across jurisdictions and digital properties
- Centralized consent and preference management
- Evidence-driven compliance operations for regulators and litigation defense