Democrats Build Secret Coordination Network to Counter AI Industry Influence at State Level

As artificial intelligence continues to reshape industries and spark intense regulatory debates, a small but influential group of Democratic state lawmakers has created a private backchannel to align their efforts and push back against heavy lobbying from tech giants.

New York State Assembly Member Alex Bores, who is running for Congress in New York’s 12th District, launched the private Signal group chat called “Frontier AI Legislators” last year. The group now includes nine members from states including New York, Massachusetts, Illinois, Colorado, Vermont, California, and Rhode Island. At least two participants, Bores and California state Sen. Scott Wiener, are also pursuing national office.

This coordination comes at a critical time. Thousands of AI-related bills are being introduced in state legislatures across the country, while Congress and the White House continue to struggle with their own comprehensive AI framework. Tech companies and AI super PACs favoring lighter regulation are pouring record sums into state-level campaigns to defeat candidates who support stricter oversight. In response, lobbyists from the biggest tech firms are flooding statehouses in numbers never seen before.

The group chat serves as a practical tool for lawmakers facing similar challenges in different parts of the country. Members share draft bill language, compare notes on what lobbyists are telling them in each state, and discuss strategies to strengthen protections around AI safety, transparency, and accountability.

Bores explained the motivation behind the group. Lawmakers noticed that industry representatives were approaching different states with slightly varying messages, sometimes softening their positions or emphasizing different risks depending on the audience. By talking directly with each other, the group aims to spot inconsistencies and build a more unified front.

“We all fundamentally agree that the best path forward on AI is one where we’re somewhat coordinated,” Bores said. “There’ll be times where we should be different, but those should be intentional.”

The chat has already influenced real legislative decisions. While drafting New York’s AI safety bill, known as the RAISE Act, Bores and his co-sponsor chose to remove a section on third-party audits that they considered too weak, holding it back for a future bill. The decision came after other group members warned that such language could set a low bar that other states might copy, or that lobbyists could use as leverage elsewhere.

“That allowed us to think about where some of the lobbyists would use provisions against us in other places, and really have a nationwide conversation,” Bores noted.

Vermont state Rep. Monique Priestley described how the group reduces the isolation many lawmakers feel when tackling complex AI issues, particularly in smaller statehouses.

“It’s often one or two people in a state house that are trying to lead, and they’re often going up against the biggest entities in the world. So it can be a very isolating experience,” Priestley said. “It’s been incredible to say: Hey, this is happening right now. Are you experiencing the same thing?”

California state Assembly Member Rebecca Bauer-Kahan highlighted another practical benefit — verifying the consistency of industry talking points.

“Are the lobbyists telling us all the same things? Are they telling us different things? Is there consistency? What is real?” she asked.

This kind of real-time information sharing helps lawmakers craft stronger, more informed policies while avoiding unintended consequences that could weaken protections in other jurisdictions.

The absence of Republican members in the group is notable, though Bores has extended an open invitation. He specifically mentioned that lawmakers from states like Utah, Nebraska, and Oklahoma — which are actively working on their own AI measures — would be welcome to join if they want to participate in the conversation.

The rise of this informal network reflects a broader shift in how state-level AI policy is developing. With federal action moving slowly, states have become the primary battleground for rules on high-risk AI systems, algorithmic transparency, data usage in training models, and safeguards against bias or harm.

For privacy and compliance professionals monitoring the regulatory landscape, this development signals that state AI rules are likely to become more consistent in certain areas due to cross-state collaboration, even without a single national law. Companies operating nationwide may soon face overlapping but aligned requirements on risk assessments, audit obligations, and disclosure rules.

At the same time, the heavy industry pushback at the state level underscores the high stakes involved. AI super PAC spending and aggressive lobbying campaigns show that tech firms are treating state legislation as seriously as federal efforts.

As more states move forward with their own AI frameworks, groups like “Frontier AI Legislators” could play a growing role in shaping the direction of those laws. Their focus on coordination aims to prevent a complete patchwork of rules while still allowing states to address local priorities.

Looking ahead, the group may expand its membership and influence as additional lawmakers seek support in navigating the fast-evolving AI space. Whether this leads to stronger consumer protections, clearer compliance obligations for businesses, or a more balanced approach between innovation and safety remains to be seen.

For organizations building AI systems or using AI in their operations, staying alert to these state-level developments is essential. Coordinated Democratic efforts at the state level could accelerate the adoption of requirements around impact assessments, third-party reviews, and transparency that go beyond voluntary industry commitments.

The “Frontier AI Legislators” chat demonstrates how lawmakers are adapting to the challenges of regulating rapidly advancing technology. By creating spaces for collaboration outside traditional channels, they are working to ensure that policy keeps pace with innovation while protecting public interests.

This behind-the-scenes coordination may quietly influence the future of AI governance in America, filling gaps left by slower federal progress and helping shape a more cohesive regulatory environment over time.
