Europe’s latest anxiety over frontier artificial intelligence has taken the shape of an access fight.
Regulators want to understand what the most advanced AI systems can do when pointed at cyber vulnerabilities. OpenAI is reportedly offering the European Commission access to a new model with cyber-defense capabilities. Anthropic, meanwhile, is facing pressure over Claude Mythos, a system that has drawn concern in Brussels because of its potential to identify and exploit software weaknesses.
It is a striking moment. AI companies are beginning to look less like ordinary technology vendors and more like strategic infrastructure providers. European officials, in turn, are acting less like privacy regulators and more like inspectors of potentially dual-use systems.
But beneath the political theater is a more uncomfortable reality: Europe’s most important cyber defense may not be access to a frontier model at all. It may be whether member states can finish implementing the laws they already passed.
The New Fight Is About Access
The immediate dispute is straightforward. European officials want visibility into advanced AI systems that may be capable of finding exploitable vulnerabilities faster than human teams. OpenAI has moved to position itself as cooperative, reportedly offering European authorities access to a new model designed to help identify gaps in cyber defenses.
Anthropic has been more cautious. Its Claude Mythos model has become a flashpoint in Brussels, with officials pressing for access and warning that formal enforcement powers under the EU AI Act begin in August 2026.
That date matters. Once the AI Office’s enforcement powers activate, the relationship between frontier AI companies and European regulators changes. What is now voluntary cooperation becomes something closer to supervised access, especially for companies that want to operate in the European market.
The politics are obvious. OpenAI can present itself as the responsible partner. Anthropic can argue that careless access to frontier cyber capabilities could create its own risks. Brussels can claim it is preparing for a new class of threat before it becomes unmanageable.
All three positions contain some truth. None of them fully addresses the larger vulnerability.
Frontier Models Are the Flashy Problem
Advanced cyber models create a difficult governance question because they can be useful and dangerous for the same reason.
A model that helps a government agency or utility find vulnerabilities before an attacker does may strengthen public security. The same class of model, in the wrong hands, could lower the cost of offensive cyber activity. It may accelerate vulnerability discovery, automate reconnaissance, or help less sophisticated actors chain weaknesses together.
That dual-use problem is why regulators are focused on access. They want to know whether these systems are merely better security tools or whether they represent a meaningful shift in offensive cyber capability.
The concern is not irrational. But it is also politically convenient. Frontier AI gives policymakers a visible target: a model, a company, a hearing, a deadline, a demand for access.
By contrast, the less glamorous work of cyber resilience is slower and harder to sell. It requires procurement discipline, patch management, incident reporting, software accountability, supply-chain controls, and national implementation of existing rules.
That is where Europe’s real test is unfolding.
NIS2 Is Supposed to Be the Backbone
The NIS2 Directive was designed to raise cybersecurity standards across critical and important sectors, including energy, transport, banking, health care, digital infrastructure, public administration, and certain technology providers.
Its purpose is not dramatic. It does not depend on secret access to a frontier AI model. It requires organizations to manage cyber risk, report serious incidents, secure supply chains, and make leadership accountable for cybersecurity governance.
That is precisely why it matters.
If Europe wants to withstand AI-accelerated cyber threats, the first line of defense is not a regulator inspecting a model in Brussels. It is whether hospitals, utilities, ports, cloud providers, manufacturers, and public agencies have implemented the baseline controls that make exploitation harder in the first place.
The uncomfortable truth is that implementation has been uneven. Some member states have moved faster than others, but the EU’s cybersecurity posture is only as strong as its weakest national transposition and enforcement regime.
In other words, Europe can demand access to Claude Mythos or OpenAI’s latest cyber model, but that does not automatically patch a municipal water system, harden a hospital network, or secure a supplier’s vulnerable software dependency.
The Cyber Resilience Act May Matter More Than the Hearing Room
The Cyber Resilience Act attacks the problem from a different angle: the security of connected products themselves.
Its premise is simple. If Europe is filling homes, factories, vehicles, hospitals, and public infrastructure with connected devices, those products should be built with security obligations from the start.
That includes security-by-design expectations, vulnerability handling, software update obligations, and accountability for manufacturers placing connected products on the EU market.
This is less dramatic than a fight over frontier AI access. It is also more structurally important.
AI-enabled attackers do not need science fiction capabilities if ordinary software remains poorly maintained, poorly configured, and poorly monitored. The easiest breach is still often the old vulnerability, the exposed credential, the unmanaged device, or the vendor integration no one owns internally.
The Cyber Resilience Act is an attempt to move responsibility upstream. Rather than forcing every customer to discover that a connected product is insecure after deployment, the law aims to make security part of market access.
That is the kind of defense that can scale. Model access cannot.
The Political Risk: Confusing Visibility With Preparedness
There is a danger that governments will mistake access for control.
Being shown a frontier model does not mean a regulator can meaningfully evaluate every downstream cyber risk. It does not guarantee that the model cannot be replicated, fine-tuned, misused, stolen, or approximated by competitors. It does not ensure that European infrastructure is ready for faster vulnerability discovery.
Access may help. It may allow regulators to understand capabilities, test safeguards, and establish trust with providers. But it is not a substitute for operational resilience.
The real question is whether Europe can turn AI anxiety into cyber execution.
That means finishing NIS2 implementation. It means enforcing the Cyber Resilience Act with seriousness. It means funding national cyber agencies. It means improving incident response. It means ensuring public-sector systems are not running years behind private-sector threats.
The frontier model debate is important. But it is not the whole battlefield.
OpenAI’s Regulatory Diplomacy
OpenAI’s move also deserves scrutiny.
Offering access to European officials is not only a safety gesture. It is regulatory diplomacy. It positions the company as cooperative at a time when European policymakers are deciding how aggressively to supervise general-purpose AI systems.
That posture has competitive value. If one frontier AI provider appears more willing to cooperate with Brussels than another, it may gain trust with governments, public agencies, and regulated industries.
In this sense, model access becomes more than a safety issue. It becomes a market signal.
The companies that can convince governments they are manageable may have an advantage over companies viewed as opaque, evasive, or too powerful to supervise.
The Real Lesson
The debate over OpenAI, Anthropic, and cyber-capable frontier models is a preview of the next phase of AI governance.
Governments will increasingly demand access, audits, testing rights, and assurances from companies building systems with strategic capabilities. AI firms will increasingly negotiate those demands as part of market access. The boundary between product safety, cybersecurity, national security, and industrial policy will continue to blur.
But Europe should be careful not to let the spectacle of frontier AI obscure the basics.
The most advanced model in the world is dangerous in part because so much ordinary infrastructure remains vulnerable. AI may accelerate the attacker. Weak cyber hygiene gives the attacker somewhere to go.
NIS2 and the Cyber Resilience Act are not as dramatic as a showdown with Silicon Valley. They are harder to explain, harder to implement, and harder to turn into headlines.
They are also the real defense.