Anthropic’s Project Glasswing Could Change Cybersecurity Forever

There are moments in tech when you read an announcement and immediately realise that something important has shifted.

That was very much my reaction when I came across Project Glasswing, a newly announced initiative from Anthropic that is aimed squarely at one of the biggest looming problems in modern computing: what happens when AI becomes exceptionally good at finding software vulnerabilities.

According to Anthropic, Project Glasswing brings together a heavyweight list of partners including Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA and Palo Alto Networks, all with the goal of securing critical software for what Anthropic calls the AI era. It is also extending access to more than 40 additional organisations that build or maintain important software infrastructure.

Now, that alone would be interesting enough, but the real headline here is the model sitting behind it all.

Anthropic says its unreleased model, Claude Mythos Preview, has already demonstrated the ability to find and exploit software vulnerabilities at a level beyond all but the most skilled human experts. That is a huge claim, and if it holds up in practice, it means we may have crossed into a very different phase of cybersecurity.

In plain English, this is not just about a chatbot helping someone write a bit of code more quickly. This is about AI being able to inspect complex software, spot weaknesses that humans and automated tools have missed for years, and in some cases work out how those weaknesses could be exploited. Anthropic says the model has already found thousands of high-severity vulnerabilities, including flaws affecting major operating systems and web browsers.

Some of the examples are rather startling. Anthropic says Mythos Preview uncovered a 27-year-old vulnerability in OpenBSD, a 16-year-old flaw in FFmpeg, and even chained together several Linux kernel vulnerabilities in a way that could escalate ordinary user access into full control of a machine. The company says those issues have now been responsibly disclosed and patched.

That, to me, is the bit that really lands.

For years we have tended to think of cybersecurity in terms of patching known issues, following best practice, keeping software up to date and hoping the really serious flaws are found by the good people before the bad people. But if AI systems are now reaching the point where they can autonomously discover dangerous bugs in code that has survived decades of scrutiny, then the pace of both defence and attack could increase dramatically.

Anthropic is clearly trying to frame Glasswing as a defensive first move. The company says it is committing up to $100 million in usage credits for Mythos Preview and $4 million in direct donations to open-source security organisations. The idea seems to be to put these capabilities into the hands of defenders, infrastructure operators and maintainers before similar systems become more widely available.

And that is probably the most sensible angle here.

Because whether we like it or not, the genie is not going back in the bottle. If one frontier AI lab can build a model that is frighteningly good at vulnerability discovery, others will too. Eventually, those capabilities will spread further. The question is not really whether AI will reshape cybersecurity. It is whether defenders can get enough of a head start to stop things getting seriously messy. That is an inference from Anthropic's announcement and the examples it gives, rather than a direct claim from the company, but it feels like the unavoidable conclusion.

For those of us who run websites, servers, ecommerce platforms, mail systems or anything else connected to the wider internet, this should be a bit of a wake-up call. The old approach of leaving systems half-maintained, delaying updates, or assuming that obscure software will somehow stay below the radar looks even more risky in a world where AI can inspect code at speed and scale.

Project Glasswing may be remembered as one of those milestone moments: the point where the cybersecurity industry publicly acknowledged that AI is no longer just a helpful assistant for defenders. It is becoming a serious force multiplier, and one that could work for either side.

That makes this announcement both exciting and slightly chilling.

And, in true Gadget Man fashion, it is exactly the kind of development that reminds us technology is never just about shiny new tools. It is also about consequences, responsibility and how quickly the world has to adapt when the rules suddenly change.

Source: Anthropic, Project Glasswing: Securing critical software for the AI era