Anthropic Announces ‘Project Glasswing’ as New AI Model Triggers Cybersecurity Concerns
Anthropic has launched Project Glasswing, a global cybersecurity initiative with partners like AWS, Google, Microsoft, Apple, CrowdStrike, and Palo Alto Networks.
A powerful new AI model has pushed the tech world into urgent action. Anthropic has announced Project Glasswing, a global effort to protect critical software from the same kind of attacks its own AI can now simulate. The initiative brings together major players including Amazon Web Services, Google, Microsoft, and Apple, along with cybersecurity firms CrowdStrike and Palo Alto Networks. At the centre of this push is Claude Mythos Preview, an unreleased AI system that, according to the company, can find and even exploit software vulnerabilities at a level matching or exceeding most human experts.
A shift in how cyber threats work
The announcement signals a bigger shift in cybersecurity. Software systems that run banks, hospitals, and infrastructure have always had flaws, but finding them required time and expertise. That equation is now changing fast. Anthropic says its model has already uncovered thousands of serious vulnerabilities, including long-hidden bugs in widely used systems. Some of these flaws had gone unnoticed for years despite repeated testing, highlighting how AI is changing the scale and speed at which weaknesses can be discovered and potentially abused.
Industry rush to stay ahead of attackers
Project Glasswing is designed as a defensive move. Partner companies will use the AI to scan their own systems, test for weaknesses, and fix issues before attackers can exploit them. Anthropic is backing this with significant investment, including $100 million in usage credits and funding support for open-source security efforts. The idea is simple: if such powerful tools are inevitable, defenders need access before attackers do.
Concerns around control and transparency
The move also comes at a time when AI companies, including Anthropic, are facing growing scrutiny. Questions around how powerful these models really are, who gets access to them, and whether safeguards are strong enough are becoming harder to ignore. Critics have pointed out that while companies talk about safety, the pace of development is accelerating rapidly, sometimes ahead of regulation or oversight.
What privacy and security experts may question
Even as Glasswing focuses on defence, it raises uncomfortable questions. Experts may ask who exactly gets access to a system capable of finding critical flaws across software ecosystems, and what checks are in place to prevent misuse. There are also concerns about data exposure, since scanning real-world systems could involve sensitive or private information. Another key issue is oversight: whether an industry-led effort is enough, or whether independent monitoring is needed as AI begins to reshape cybersecurity itself. The risk that similar capabilities could eventually fall into the wrong hands is also hard to ignore.
Published By: Priya Pathak
Published On: 10 April 2026 at 10:40 IST