Big Tech and industry leaders unite with Anthropic on AI cyber defense push

Coalition with major tech and infrastructure players focuses on using frontier AI to detect and fix vulnerabilities before attackers exploit them.

Anthropic has launched Project Glasswing, a cross-industry initiative bringing together companies including AWS, Apple, Google, Microsoft, NVIDIA, and Cisco to secure critical software systems using advanced AI models.

The move follows internal testing of a new frontier model, Claude Mythos Preview, which has already identified thousands of previously unknown vulnerabilities across widely used software.

The initiative signals a shift in how AI is being applied to cybersecurity, with models now capable of matching or exceeding human expertise in identifying and exploiting software flaws. For EdTech and workforce development, it highlights growing demand for advanced AI and security skills as these capabilities move closer to real-world deployment.

Frontier AI model reveals scale of vulnerability risk

Anthropic’s announcement is built around Claude Mythos Preview, a limited-release model designed for advanced coding and reasoning tasks. According to the company, the model has identified high-severity vulnerabilities in major operating systems, web browsers, and widely used software components, including issues that had remained undetected for years.

In a LinkedIn post, Anthropic President Daniela Amodei said the model “has so far found thousands of zero-day vulnerabilities — including some that survived decades of human review — spanning every major operating system and browser.”

Anthropic also states that many of these vulnerabilities were identified autonomously, without human guidance, pointing to a rapid increase in the effectiveness of AI-driven code analysis.

The company warns that such capabilities are likely to become more widely available in the near term. Amodei noted in the same post that “AI cyber capabilities at this level will proliferate over the coming months, and not every actor who gets access to them will be focused on defense.”

Industry coalition focuses on defensive use of AI

Project Glasswing brings together a group of major technology companies, financial institutions, and open-source organizations to apply these capabilities in a defensive context. Partners will use the model to scan their own systems and shared infrastructure, including open-source software that underpins large parts of the global technology stack.

Anthropic states that more than 40 additional organizations responsible for maintaining critical infrastructure are also participating, extending the reach of the initiative beyond its core partners.

The company is committing up to $100 million in model usage credits and $4 million in funding for open-source security organizations to support this work. Findings from the initiative will be shared with the wider industry, with the aim of improving defensive practices as AI capabilities evolve.

The initiative also includes collaboration with government stakeholders, reflecting the national security implications of AI-driven cyber capabilities.

From human-limited security to AI-scale detection

Anthropic frames the development as a shift in the economics of cybersecurity. Traditionally, identifying and exploiting vulnerabilities has required specialist expertise and significant time investment. The emergence of models such as Mythos Preview reduces those barriers, enabling faster and more scalable detection of software flaws.

The company highlights examples where the model identified long-standing vulnerabilities, including issues in operating systems, media processing libraries, and core infrastructure software. In some cases, vulnerabilities had persisted despite extensive testing and review.

The dual-use nature of these capabilities remains a central concern. Anthropic notes that the same systems that can be used to strengthen defenses could also be used to accelerate cyberattacks, increasing both the frequency and sophistication of threats.

In the same LinkedIn post, Amodei said "that's the gap Glasswing is built to close," positioning the initiative as an effort to ensure defensive actors can keep pace with rapidly advancing AI capabilities.

Implications for skills and AI adoption

For EdTech and workforce development, Project Glasswing points to a growing need for specialized skills in AI, cybersecurity, and software engineering. As AI systems become more embedded in critical infrastructure, the ability to understand and manage these tools is likely to become a priority across both public and private sectors.

The initiative also reflects a broader shift toward collaborative models of AI deployment, where companies, researchers, and governments work together to address shared risks.

Anthropic indicates that the work will continue over the coming months, with plans to publish findings and recommendations on how security practices should evolve in response to AI-driven threats.
