Anthropic is widening access to an unreleased AI model for cybersecurity, pulling in a roll call of Big Tech and security vendors as it tries to prove that powerful AI can defend systems as well as probe them. The company says the effort, called Project Glasswing, will let selected partners test "Claude Mythos Preview" for defensive work, while Anthropic shares the findings more broadly.
The partner list reads like a who’s who of the enterprise AI race: Amazon.com, Microsoft, Apple, Google, Nvidia, CrowdStrike, and Palo Alto Networks are all involved. That mix is telling. Cloud and chip giants want a seat at the table, while security vendors have every reason to shape how a model like this is used before it becomes another tool in attackers’ hands.
What Project Glasswing gives partners
Anthropic said Mythos Preview found "thousands" of major vulnerabilities across operating systems, web browsers, and other software. The company is also expanding access to about 40 additional organisations tied to critical software infrastructure, and it is committing up to $100 million in usage credits, plus $4 million in donations to open-source security groups.
That kind of grant-making is more than goodwill. It is a way to seed adoption, collect feedback, and frame Anthropic as the responsible AI player in a market where every vendor claims to be the adult in the room. OpenAI and Google have both been pushing security-oriented AI tooling too, so the pitch here is not just model quality; it is trust, access, and who gets to define "safe" first.
Why the cybersecurity crowd is paying attention
The timing is no accident. This year’s RSA cybersecurity conference in San Francisco was already dominated by questions about AI-powered attacks and whether conventional defenses still hold up. Anthropic is leaning into that anxiety by saying its eventual goal is for users to safely deploy Mythos-class models at scale.
It is also trying to answer an awkward question hanging over the industry: if AI can spot holes faster, can it also be turned against defenders just as quickly? Anthropic is clearly betting that the best way to avoid that outcome is controlled access, not secrecy. That is a sensible line, even if it also doubles as a very polished sales pitch.
- Model: Claude Mythos Preview
- Program: Project Glasswing
- Additional organisations: about 40
- Funding: up to $100 million in usage credits
- Open-source support: $4 million in donations
Anthropic’s AI cybersecurity model and the government angle
Anthropic also said it has been in ongoing discussions with the U.S. government about the model's capabilities. That matters because cyber AI is headed toward regulation whether vendors like it or not, especially after Anthropic disclosed that hackers abused its Claude AI to attack around 30 global organisations last year.
Meanwhile, the numbers from IBM and Palo Alto Networks help explain why the market is moving so fast: 67% of 1,000 executives surveyed said they had been targeted by AI-powered attacks within the past year. Expect more of this. The real contest now is not whether AI ends up in cybersecurity; it is whether the companies building it can stay one step ahead of the people trying to abuse it.