Anthropic has just confirmed that its most advanced model, Claude Mythos Preview, autonomously identified thousands of zero-day vulnerabilities across every major operating system and browser in the world — including a 17-year-old remote code execution flaw in FreeBSD that had never previously been discovered. The company didn’t release the tool to the public. Instead, it launched Project Glasswing: a restricted initiative giving access only to a select group of corporate partners and open-source developers, with a clear mandate — fix the vulnerabilities before malicious actors develop similar capabilities.
This is the scenario the security industry had feared for years: an AI capable of operating as the world’s most skilled hacker. That scenario has now arrived.
The Model That Alarmed Its Own Creators
Claude Mythos Preview is Anthropic’s unreleased frontier model. According to the company, it “saturates” existing cybersecurity benchmarks, making real-world evaluation the only meaningful measure of its capabilities.
In controlled experiments, Mythos Preview autonomously exploited a 17-year-old remote code execution vulnerability in FreeBSD. In the weeks that followed, the model was used internally to map zero-day vulnerabilities across virtually every major operating system and browser. The exact number hasn’t been disclosed, but Anthropic confirmed the figure runs into the “thousands.”
A zero-day vulnerability is a flaw unknown to the software vendor — meaning no patch exists, and any actor who discovers it holds a complete advantage over potential victims.
Project Glasswing: An Unprecedented Security Coalition
To address the paradox — a tool too powerful to release publicly, yet too urgent to sit unused — Anthropic launched Project Glasswing.
The initiative brings together some of the largest technology and security companies in the world as founding partners: Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks. All will have access to Claude Mythos Preview with a clear mandate: use the model to find and fix critical vulnerabilities in their systems before malicious actors get there first.
The logic is straightforward but sobering. If Anthropic managed to build a model with these offensive capabilities, other labs, including those operating without the same ethical safeguards, are likely only months, or at most a few years, away from reaching the same point. Project Glasswing is, in essence, a defensive arms race: deploy the advanced model now, while only the defenders have it.
“By releasing this model initially to a limited group of critical industry partners and open-source developers, Anthropic aims to enable defenders to begin securing the most important systems before models with similar capabilities become broadly available,” the company stated in its official announcement.
What This Means for Organizations in Practice
The founding partners of Project Glasswing will gain access to a tool capable of automatically auditing their systems and flagging flaws before any incident occurs. Organizations outside that circle — which means most companies worldwide — remain dependent on a security ecosystem that hasn’t yet absorbed this new reality.
There is also a growing governance dimension. As AI regulation advances globally and data protection laws tighten, liability for security incidents that AI tools could have prevented is coming under increasing legal scrutiny. An organization breached through a zero-day vulnerability that available AI tooling could have detected will struggle to demonstrate adequate due diligence.
What Your Organization Should Do Now
Most organizations won’t have direct access to Claude Mythos Preview in the near term. But it is possible — and necessary — to begin preparing for an environment where offensive AI is a real and growing threat. Some concrete steps:
- Attack surface audit: map which legacy systems are not receiving regular patches. Long-unpatched systems are precisely the targets automated tools are most likely to find first.
- Update security vendor contracts: require transparency about AI use in detection and incident response tools already deployed internally.
- Monitor CVEs and sector-relevant zero-days: tools like the MITRE CVE database and national CERT alerts are free and effective for tracking emerging threats.
- Revise AI governance policies: any AI model with access to critical systems needs clearly defined scope and limits, regardless of vendor.
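As a minimal sketch of the CVE-monitoring step above: in practice you would pull records from a live feed such as NIST's NVD API, but the triage logic itself is simple to illustrate with local data. The `CveRecord` class, the `triage` function, and the sample CVE IDs below are all hypothetical names invented for this example, not part of any real feed or tool.

```python
from dataclasses import dataclass, field

@dataclass
class CveRecord:
    """A simplified, hypothetical CVE record (real NVD entries carry far more fields)."""
    cve_id: str
    cvss_score: float          # CVSS v3.1 base score, 0.0-10.0
    products: list = field(default_factory=list)  # affected product names
    description: str = ""

def triage(records, watched_products, min_score=7.0):
    """Return high-severity CVEs that touch products we actually run,
    sorted most severe first."""
    hits = [
        r for r in records
        if r.cvss_score >= min_score
        and any(p in r.products for p in watched_products)
    ]
    return sorted(hits, key=lambda r: r.cvss_score, reverse=True)

# Hypothetical sample data for illustration only.
sample = [
    CveRecord("CVE-2026-0001", 9.8, ["freebsd"], "Remote code execution"),
    CveRecord("CVE-2026-0002", 4.3, ["freebsd"], "Information disclosure"),
    CveRecord("CVE-2026-0003", 8.1, ["nginx"], "HTTP request smuggling"),
]

urgent = triage(sample, watched_products=["freebsd"])
print([r.cve_id for r in urgent])  # → ['CVE-2026-0001']
```

The same filter-and-rank shape scales to a real pipeline: swap the sample list for records fetched from the NVD API or a national CERT feed, and route anything above the severity threshold into your patch-management queue.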
Project Glasswing also signals a broader trend: corporate cybersecurity will need AI fluency — not just as a user, but as a manager of risks specific to the era of advanced language models.
The Tool That Can Both Destroy and Protect Everything
Anthropic has chosen, at least for now, to bet on the defensive side of this paradox. But the window is narrow. The company has already announced plans to ship new safeguards alongside the upcoming Claude Opus model. The implication is that Mythos Preview, whose offensive capabilities are currently restricted, will eventually reach a broader audience, ideally with stronger controls in place.
For technology teams and business leaders worldwide, the message is clear: the window for proactive response is open, but it will not stay open forever.
This article was published on May 8, 2026. Follow Entercast to stay ahead of the next developments.