In one of the most consequential AI announcements in recent memory, Anthropic has unveiled Claude Mythos Preview — its most powerful AI model to date — as the centerpiece of a sweeping new cybersecurity initiative called Project Glasswing. The model, which was quietly leaked months ago under the codename “Capybara,” is not being released to the public. Instead, it has been handed to a carefully selected group of technology titans and security firms for strictly defensive purposes. The announcement has sent shockwaves through the cybersecurity world, raising hope and alarm in equal measure.

What Is Project Glasswing?
Project Glasswing brings together Amazon Web Services, Anthropic, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks in an effort to secure the world’s most critical software.
Anthropic is committing up to $100 million in usage credits for Claude Mythos Preview across these efforts, as well as $4 million in direct donations to open-source security organizations. Under the initiative, the 12 partner organizations will deploy the model strictly for “defensive security work” and for securing critical software.
The initiative’s name carries symbolic weight. Anthropic employees chose the name Project Glasswing after the glasswing butterfly, whose transparent wings serve as a metaphor for software vulnerabilities, which are “relatively invisible.”
Anthropic does not plan to make Claude Mythos Preview generally available, but its eventual goal is to enable users to safely deploy Mythos-class models at scale — for cybersecurity purposes, but also for the myriad other benefits that such highly capable models will bring.
What Makes Mythos So Powerful — and So Dangerous?
Mythos Preview is “extremely autonomous,” with sophisticated reasoning capabilities that give it the skills of an advanced security researcher. It can find “tens of thousands of vulnerabilities” that even the most advanced human bug hunters would struggle to uncover.
No specialized cybersecurity training went into building Mythos Preview. The model’s ability to probe software for weaknesses is a byproduct of the same general advances in coding and reasoning that define its performance in other domains, meaning the attributes that help it fix flaws are inseparable from those that could be turned toward exploiting them.
In testing, Mythos Preview found bugs in “every major operating system and web browser,” including some that are believed to be decades old and weren’t detected by repeated human-run security tests. Mythos Preview successfully reproduced vulnerabilities and produced working proof-of-concept exploits on the first attempt in 83.1% of cases.
Perhaps most alarming: Mythos Preview found several flaws in the Linux kernel, which runs most of the world’s servers, and autonomously chained them together in a way that would let an attacker take complete control of any machine running Linux. In another test, Mythos Preview found a 27-year-old vulnerability in OpenBSD that would allow attackers to remotely crash any machine running the system.
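The idea of “chaining” vulnerabilities can be understood abstractly: each flaw converts one capability an attacker holds into a stronger one, and a chain composes several flaws into full compromise. The sketch below illustrates only this general concept; the flaw names and capabilities are invented for illustration and describe nothing about Mythos or any real vulnerability.

```python
# Illustrative only: models vulnerability chaining as a path search over
# attacker capabilities. All flaw names and capabilities are hypothetical.
from collections import deque

# Hypothetical flaws: (name, capability required, capability gained)
FLAWS = [
    ("info-leak",       "network access", "memory layout"),
    ("heap overflow",   "memory layout",  "code execution"),
    ("priv-escalation", "code execution", "root"),
]

def find_chain(start, goal, flaws):
    """Breadth-first search for a sequence of flaws leading from the
    starting capability to the goal capability, or None if no chain exists."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        capability, chain = queue.popleft()
        if capability == goal:
            return chain
        for name, needs, gains in flaws:
            if needs == capability and gains not in seen:
                seen.add(gains)
                queue.append((gains, chain + [name]))
    return None

print(find_chain("network access", "root", FLAWS))
# → ['info-leak', 'heap overflow', 'priv-escalation']
```

None of these toy flaws grants root on its own; only their composition does, which is why a model that can both find individual flaws and reason about how they compose is qualitatively more dangerous than a conventional scanner.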
Cybersecurity: The Positive Impact
1. Unprecedented Vulnerability Discovery at Scale
Over the past few weeks, Anthropic has used Claude Mythos Preview to identify thousands of zero-day vulnerabilities — flaws that were previously unknown to the software’s developers — many of them critical, in every major operating system and every major web browser, along with a range of other important pieces of software. In some cases, these vulnerabilities have “survived decades of human review and millions of automated security tests.” This represents a paradigm shift in defensive security — finding and patching flaws that human researchers simply could not find.
2. Democratizing Elite Security Expertise
The same capabilities that make AI models dangerous in the wrong hands make them invaluable for finding and fixing flaws in important software — and for producing new software with far fewer security bugs. Smaller organizations and open-source maintainers who cannot afford full security teams now have access, through Project Glasswing, to a resource that functions at the level of the world’s best security researchers.
3. Securing Open-Source Infrastructure
Anthropic is providing access to roughly 40 more organizations responsible for building or maintaining critical software infrastructure, allowing them to use the model to scan and secure both their own systems and open-source code. The open-source ecosystem — which underpins most of the world’s internet infrastructure — has long been chronically under-resourced in security. Mythos could change that dramatically.
4. Giving Defenders a Structural Advantage
Project Glasswing is an important step toward giving defenders a durable advantage in the coming AI-driven era of cybersecurity. For the first time, the tools available to defenders may be meaningfully more powerful than what most attackers can access — at least in the short term.
5. Transparency and Coordination with Government
Within 90 days, Anthropic plans to publish a public report on vulnerabilities found and patched, as well as recommendations for how security practices should evolve. Anthropic has been in ongoing discussions with U.S. government officials, including the Cybersecurity and Infrastructure Security Agency and the Center for AI Standards and Innovation, about the model’s capabilities.
Cybersecurity: The Negative Impact
1. The Dual-Use Dilemma Cannot Be Resolved
In a previously leaked memo, Anthropic warned the model could potentially pose a cybersecurity threat if weaponized by bad actors to find bugs and exploit them, rather than fix them. There is no technical way to make the model’s offensive and defensive capabilities separable — the same intelligence that patches vulnerabilities can be pointed at exploiting them.
2. Proliferation Is Inevitable
Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely. The fallout — for economies, public safety, and national security — could be severe. Anthropic’s controlled release is a short-term measure. Competitors, both domestic and foreign, are racing toward similar capabilities.
3. Nation-State and Criminal Threat Amplification
Anthropic has already privately warned top government officials that Mythos makes large-scale cyberattacks significantly more likely this year. A model capable of finding and chaining Linux kernel vulnerabilities autonomously could be weaponized by hostile nation-states or sophisticated criminal groups to attack hospitals, power grids, and financial systems.
4. Existing Security Products May Become Obsolete
Following Fortune’s report about the model’s existence, shares in CrowdStrike, Palo Alto Networks, Zscaler, SentinelOne, Okta, Netskope, and Tenable all slumped between 5% and 11% as investors worried that increasingly capable AI models could undermine demand for traditional security products. If AI can autonomously detect and patch vulnerabilities, entire categories of the $200B+ cybersecurity industry face disruption.
5. Speed Asymmetry Favors Attackers
As CrowdStrike’s CTO warned, tasks that once demanded months of work “now happen in minutes with AI.” While defenders may use Mythos to patch systems, the same compression of time applies to attackers once they gain access to equivalent tools.
Data Privacy and Protection: The Positive Impact
1. Patching Vulnerabilities That Expose Personal Data
Many of the thousands of zero-days Mythos has already found are vulnerabilities in operating systems and browsers — the exact software layers through which the vast majority of personal data breaches occur. Patching these bugs at scale means billions of users’ data is better protected than it was before.
2. Strengthening the Financial Sector’s Defenses
Claude Mythos has already been used to find serious security flaws in every major operating system and web browser, the platforms on which banking software ultimately runs. JPMorganChase’s inclusion as a partner signals that Mythos is being applied directly to financial infrastructure, protecting the sensitive financial data of millions of consumers.
3. Open-Source Security = Stronger Privacy Foundations
Much of the software that handles encrypted communications, authentication, and personal data management is open source. Anthropic is committing $2.5 million to Alpha-Omega and the Open Source Security Foundation through the Linux Foundation, and $1.5 million to the Apache Software Foundation. Strengthening these foundations directly strengthens privacy protections globally.
Data Privacy and Protection: The Negative Impact
1. The Model Itself Is a Privacy Risk
Mythos is an extraordinarily capable reasoning and coding model. If access were ever to be misused, whether by an insider at one of the dozens of organizations granted access or through a breach, the model could be directed toward identifying vulnerabilities in systems that store sensitive health, financial, or personal data.
2. Anthropic’s Own Data Exposure History Is a Warning Sign
The existence of the model had already surfaced publicly after internal draft materials turned up in an unsecured location on Anthropic’s servers; the company traced the exposure to a misconfiguration in a third-party content management tool. The irony is stark: the company building the world’s most powerful cybersecurity AI suffered a data exposure of its own. This raises legitimate questions about the security of Mythos-related data and research.
3. Regulatory and Legal Frameworks Are Not Ready
Mythos operates in a space that existing data protection laws — GDPR, CCPA, HIPAA — were never designed to govern. When an AI model autonomously scans software systems for vulnerabilities, it may incidentally access or process personal data stored within those systems. The legal framework for what constitutes lawful processing in this context remains deeply unclear.
4. Government Coordination Could Lead to Surveillance Creep
Anthropic has been in ongoing discussions with federal officials about the use of Mythos. While framed in terms of national security and defense, the involvement of government agencies in deploying a model capable of autonomously probing software systems raises civil liberties concerns about the potential expansion of state surveillance capabilities under the guise of cybersecurity.
Mythos and Project Glasswing: The Defensive Posture
Anthropic’s Claude Mythos Preview and Project Glasswing represent a genuine inflection point — a moment where AI has crossed a threshold from being a useful tool for security professionals into something that can autonomously outperform the world’s best human hackers. As CEO Dario Amodei put it: “The dangers of getting this wrong are obvious, but if we get it right, there is a real opportunity to create a fundamentally more secure internet and world than we had before the advent of AI-powered cyber capabilities.”
The project is, by Anthropic’s own admission, an urgent attempt to put these capabilities to work for defensive purposes before they inevitably escape into less responsible hands. Whether this window of advantage can be maintained — and whether the governance structures needed to manage it responsibly can be built fast enough — may be one of the defining technological policy questions of the next decade.