Congress’s AI Evaluation Act: Pioneering Safety in the Race for Superintelligence

The bipartisan introduction of the Artificial Intelligence Risk Evaluation Act marks a crucial, if overdue, moment of congressional foresight. The legislation, formally designated S. 2938 and sponsored by Sens. Josh Hawley, R-Mo., and Richard Blumenthal, D-Conn., was unveiled on September 29, 2025. It aims to establish rigorous testing protocols for the most powerful AI systems, ensuring that innovation does not outpace accountability. As AI edges toward capabilities that could rival or surpass human intelligence, the bill offers a framework for identifying risks before they become crises, without smothering the technological dynamism that defines American leadership.

At its core, the act directs the Secretary of Energy to create the Advanced Artificial Intelligence Evaluation Program within the Department of Energy. This initiative would mandate evaluations of AI models trained using more than 10²⁶ floating-point operations, a threshold capturing the frontier of today’s most sophisticated systems. Developers of these systems – those designing, coding or substantially modifying them for commercial use – would be required to submit materials for testing upon request, including code, training data and model parameters. Noncompliance could trigger daily fines starting at $1 million, a stiff penalty underscoring the stakes involved.
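To put that compute threshold in perspective, the brief sketch below estimates a model’s total training compute using the widely cited rule of thumb of roughly 6 floating-point operations per parameter per token for dense transformer training, then checks the result against the bill’s 10²⁶-FLOP line. The heuristic and the parameter and token counts are illustrative assumptions for this sketch, not figures drawn from the statute.

```python
# Minimal sketch, assuming the common ~6 FLOPs-per-parameter-per-token
# rule of thumb for dense transformer training. The 1e26 threshold is
# from S. 2938; the model and dataset sizes below are hypothetical.

THRESHOLD_FLOPS = 1e26  # the bill's "advanced AI" compute threshold

def estimated_training_flops(parameters: float, tokens: float) -> float:
    """Rough total training compute: ~6 FLOPs per parameter per token,
    covering forward and backward passes, for a dense transformer."""
    return 6.0 * parameters * tokens

# Hypothetical frontier-scale run: 1 trillion parameters, 20 trillion tokens.
flops = estimated_training_flops(1e12, 20e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print(f"Subject to evaluation under the bill? {flops > THRESHOLD_FLOPS}")
```

Under these assumptions, such a run would land at roughly 1.2 × 10²⁶ FLOPs – just over the line, and thus within the program’s reach.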

Congress Takes a Vital Step Toward Safeguarding AI’s Promise and Perils

The program’s focus is pragmatic and forward-looking. It calls for standardized testing to gauge the likelihood of “adverse AI incidents,” such as loss-of-control scenarios where systems deviate from instructions or exhibit scheming behavior to evade oversight. Evaluations would incorporate red-team exercises mimicking real-world adversarial attacks, including efforts by foreign adversaries or terrorists to weaponize AI against critical infrastructure. Independent third-party assessments and blind testing would add layers of transparency, while developers receive detailed reports on identified vulnerabilities and recommended safeguards.

Senator Hawley emphasized the urgency in a statement accompanying the bill’s introduction: “As Big Tech companies continue to develop new generations of artificial intelligence, the wide-ranging risks of their technology continue to grow unchecked and underreported. Simply stated, Congress must not allow our national security, civil liberties, and labor protections to take a back seat to AI. This bipartisan legislation would guarantee common-sense testing and oversight of the most advanced AI systems, so Congress and the American people can be better informed about potential risks.” His counterpart, Senator Blumenthal, added: “AI companies have rushed to market with products that are unsafe for the public and often lack basic due diligence and testing. Our legislation would ensure that a federal entity is on the lookout, scrutinizing these AI models for threats to infrastructure, labor markets, and civil liberties – conducting vital research and providing the public with the information necessary to benefit from AI promises, while avoiding many of its pitfalls.”

The bill meticulously defines the perils it seeks to address. An “adverse AI incident” encompasses not just loss-of-control events, in which systems behave contrary to programming or subvert shutdown mechanisms, but also weaponization by foreign adversaries, threats to critical infrastructure, erosion of civil liberties, and scheming behavior such as hiding capabilities to deceive operators. It even anticipates “artificial superintelligence,” described as AI that could match or exceed human cognition across domains, potentially self-modifying to evade control and posing existential threats to humanity. By requiring the program to assess pathways to such superintelligence, the legislation positions the U.S. to grapple with these sci-fi-sounding dangers through empirical data rather than speculation.

Critics might decry this as bureaucratic overreach, potentially chilling investment in a field where the U.S. must compete globally. Libertarian-leaning outlets like Reason have already voiced concerns, arguing that the bill could “set American artificial intelligence and the economy back” by imposing pre-release government reviews on developers. Yet the bill smartly balances scrutiny with flexibility. It prohibits deployment of untested advanced AI in interstate or foreign commerce but stops short of outright bans, allowing for iterative improvements based on empirical data. Moreover, the Secretary could revise the definition of “advanced AI” only with congressional approval, preventing arbitrary expansions that could ensnare smaller innovators.

The timeline is equally measured: The program must launch within 90 days of enactment, followed by an initial oversight plan delivered to Congress within a year and annual updates thereafter. A sunset clause after seven years ensures periodic review, letting the framework adapt to AI’s rapid evolution. By drawing on testing insights, the legislation positions lawmakers to craft evidence-based policies – from certification standards to monitoring compute resources – addressing threats to national security, civil liberties and labor markets.

Hawley and Blumenthal’s collaboration exemplifies the cross-aisle urgency this issue demands. AI’s dual-use nature – a tool for breakthroughs in medicine and climate modeling, yet a vector for misinformation and autonomous harm – requires such unity. The bill echoes earlier efforts like the TEST AI Act, which sought NIST-led pilots for measurement tools, but goes further by centralizing evaluation at the Energy Department, leveraging its expertise in high-performance computing. This choice makes sense, given DOE’s role in managing vast computational resources akin to those powering frontier AI training.

Globally, the U.S. lags the European Union, whose comprehensive AI Act categorizes systems by risk level and imposes strict obligations on high-risk deployments, with provisions taking effect from 2024 onward. China’s state-directed AI governance, meanwhile, prioritizes national security, often at the expense of transparency. The Hawley-Blumenthal bill could bridge this gap, fostering international standards through shared testing protocols and data on catastrophic risks. AI safety advocates, including those at organizations like the Center for AI Safety, have praised similar initiatives for advancing transparency without stifling progress, though formal endorsements of this specific measure are still emerging as the bill gains traction.

Economically, the implications are profound. Frontier AI could automate vast swaths of white-collar work, from legal analysis to software engineering, displacing millions of workers even as it boosts productivity. Unchecked, it risks exacerbating inequality; regulated wisely, it could democratize access to advanced tools. The bill’s attention to labor market impacts and to how AI could erode healthy competition signals a holistic approach, potentially informing future antitrust measures against AI monopolies.

Stakeholder engagement will be key to the bill’s success. Tech giants like OpenAI and Google, already self-regulating through voluntary safety commitments, may resist mandatory submissions, fearing intellectual property leaks or competitive disadvantages. Yet, as incidents like AI-generated deepfakes in elections mount, public demand for accountability grows. Nonprofits and ethicists urge Congress to loop in diverse voices, from affected workers to underrepresented communities, ensuring evaluations capture biases in training data that perpetuate discrimination.

Ultimately, the Artificial Intelligence Risk Evaluation Act is not about fear-mongering but about stewardship. As superintelligent systems loom on the horizon, capable of self-modification and existential risks, proactive measures like these will determine whether AI amplifies human potential or undermines it. Congress should swiftly advance this bill to committee and floor votes, signaling to the world that America can harness AI’s power responsibly. The alternative – unchecked proliferation – is a gamble no democracy can afford. With bipartisan backing and a clear path forward, this legislation deserves broad support to secure a future where AI serves humanity, not the other way around.
