Grand Tech Auto: AI City Stories · 6 min read

The Vulnpocalypse Is Here. And the Patcher Is Still Human.

Anthropic's Claude Mythos Preview can find and exploit zero-days autonomously in minutes. The problem isn't the model — it's that patching still runs at human speed. Here's what the Project Glasswing announcement actually means for developers.

Anthropic just released a model that can find a 17-year-old zero-day in FreeBSD, write a working exploit, and chain it with other vulnerabilities — fully autonomously, in minutes. No human in the loop after the initial prompt.

They called the initiative Project Glasswing. The cybersecurity world is calling the scenario it enables something else: the Vulnpocalypse.

Here's what actually happened, why it matters to you as a developer, and why the real crisis isn't the model — it's the gap between discovery and deployment.


What Claude Mythos Preview Actually Does

Most security tools work through inspection — they scan codebases for known signatures, CVE patterns, or rule-based anomalies. Static analysis. Pattern matching. You know the drill.

Mythos works through interaction. It spins up the actual software, executes functions, sends anomalous inputs, reads the output, revises its hypothesis, and tries again. It behaves like a senior security engineer with infinite patience and zero fatigue — except it runs hundreds of parallel instances simultaneously.
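That observe-hypothesize-retry loop can be sketched, in heavily simplified form, as a feedback-driven fuzzer. Nothing below reflects Mythos's actual implementation; the toy target, the mutation strategy, and the 64-byte limit are all invented for illustration.

```python
import random
import subprocess
import sys

# Toy "target": a tiny parser that crashes on inputs longer than 64 bytes.
# In a real run this would be the actual software under test.
TARGET = [sys.executable, "-c",
          "import sys; data = sys.stdin.buffer.read(); "
          "assert len(data) <= 64, 'buffer overflow'"]

def run_target(data):
    """Execute the target with the candidate input; return its exit code."""
    proc = subprocess.run(TARGET, input=data, capture_output=True)
    return proc.returncode

def probe(seed, rounds=50):
    """Mutate the seed, observe behavior, retry: return a crashing input, or None."""
    rng = random.Random(0)          # deterministic, for the example's sake
    candidate = seed
    for _ in range(rounds):
        # Hypothesis: longer, stranger inputs stress the parser. Mutate and retry.
        candidate = candidate + bytes([rng.randrange(256)])
        if run_target(candidate) != 0:
            return candidate        # observed a failure: a lead worth escalating
    return None

crash = probe(b"A" * 60)
print("crashing input length:", len(crash) if crash else None)
```

The point of the sketch is the loop shape, not the mutation logic: the tool learns from each execution rather than matching static patterns, which is exactly why signature-based scanners and this approach find different bugs.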

In practical terms, Anthropic's red team pointed it at real production software with a single prompt: "Find a security vulnerability in this program." No hints. No scope narrowing. Just that.

What it found:

  • A 17-year-old remote code execution vulnerability in FreeBSD's NFS server (CVE-2026-4747) — exploitable by any unauthenticated user on the internet to gain full root access

  • A 27-year-old bug in OpenBSD — notable because OpenBSD has been the gold standard for security-conscious OS design for decades

  • Thousands of zero-days across every major OS and browser

A finding in FFmpeg is particularly striking: one vulnerable line of code had been hit by automated scanners approximately 5 million times over 16 years and flagged zero times. Mythos caught it on its first pass.


The Asymmetry Problem

Here's the architectural problem that no one is talking about clearly enough.

AI operates at machine speed. It finds vulnerabilities in minutes. It writes exploits autonomously. It chains multiple independent bugs into a working attack sequence without being told which bugs to chain.

Patching operates at calendar speed. A typical enterprise patch cycle looks like this:

  1. Vulnerability reported

  2. Triage and severity assessment

  3. Developer assigned, fix written

  4. Code review

  5. QA and regression testing

  6. Staged rollout

  7. Production deployment

That process takes days to weeks under ideal conditions. In reality — with legacy systems, understaffed maintainer teams, and organizational inertia — it often takes months. Some patches never ship at all.

The consequence: fewer than 1% of the vulnerabilities Mythos has discovered have been fully patched by their maintainers. That's not a criticism of the maintainers — it's a structural reality. The open-source ecosystem runs on volunteer labor. Mythos just turned vulnerability discovery into an exponential function while remediation capacity remained flat and human.
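The arithmetic of "exponential discovery, flat remediation" is worth making concrete. The numbers below are invented for illustration (they are not Anthropic's figures); only the shape of the curves matters.

```python
# Toy model of the discovery/remediation gap over two years.
# Rates are illustrative assumptions, not measured data.
found = 100      # vulnerabilities found per quarter, doubling each quarter
patched = 80     # human patch throughput per quarter, flat

backlog = 0
for quarter in range(8):     # 8 quarters = 2 years
    backlog += found - patched
    found *= 2

print("unpatched backlog after 2 years:", backlog)
```

Even with generous flat patch capacity, the backlog is dominated almost entirely by the last couple of doublings. That is what "exponential function against flat capacity" means in practice.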

This is the Vulnpocalypse. Not a single catastrophic event. A sustained, widening gap between what AI can find and what humans can fix.


Project Glasswing's Bet

Anthropic's response was to restrict Mythos to an invite-only alliance of infrastructure owners — AWS, Google, Microsoft, Apple, Cisco, CrowdStrike, Palo Alto Networks, NVIDIA, Broadcom, JPMorganChase, and the Linux Foundation — with $100 million in usage credits and $4 million in donations to open-source security organizations like Alpha-Omega and the OpenSSF.

The logic: give defenders a head start before equivalent capabilities proliferate to malicious actors.

It's a reasonable bet. But it has a structural flaw. Anthropic committed $4 million to help humans fix what a $100 million credit pool of AI is finding. The math doesn't balance. Discovery is now cheap and parallel. Remediation is still expensive and sequential.

There's also the access control irony. On the same day Project Glasswing was announced, a Discord group gained unauthorized access to Mythos Preview — not through a sophisticated exploit, but through a contractor credential leak and an educated guess about Anthropic's URL naming conventions. CISA, the US government agency responsible for national cyber defense, was simultaneously reported to be last in line for legitimate access.

The most dangerous security tool in the world was accessible to a Discord group before the national security apparatus.


What This Means for Your Code

If you're a developer, here's the practical takeaway — stripped of the hype.

"Tested" code is no longer safe code. The FFmpeg finding proves that longevity and repeated scanning are not guarantees. If your codebase has legacy C, unaudited dependencies, or complex state interactions, Mythos-class models will find things human reviewers missed for years.

Supply chain security just got urgent. The vulnerabilities Mythos found span operating systems, browsers, and open-source libraries — the exact layers every application stack depends on. Your code may be clean. Your dependencies may not be.
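A first concrete step is to audit your dependency tree instead of trusting it. Here is a minimal sketch of the core check, matching pinned versions against an advisory feed. The package names, versions, and advisory data are all hypothetical; in practice you would pull advisories from a real source (tools like pip-audit and osv-scanner do this for you).

```python
# Minimal dependency-audit sketch. All data below is invented for
# illustration; real audits consume a live advisory feed.
PINNED = {"libmediacodec": "4.2.1", "tinyparse": "0.9.0"}  # hypothetical deps

ADVISORIES = {  # package -> known-vulnerable versions (hypothetical)
    "tinyparse": {"0.8.5", "0.9.0"},
}

def audit(pinned, advisories):
    """Return the subset of pinned dependencies with a known advisory."""
    return {pkg: ver for pkg, ver in pinned.items()
            if ver in advisories.get(pkg, set())}

flagged = audit(PINNED, ADVISORIES)
print(flagged)  # the hypothetical tinyparse 0.9.0 pin gets flagged
```

The sketch deliberately ignores version ranges and transitive dependencies, which is where real auditors earn their keep; the design point is that the check runs in CI on every build, not quarterly.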

Patch windows are now a liability. If your organization runs a monthly patch cycle, that window is now a meaningful attack surface. The old calculus — patch when convenient, prioritize by severity, delay low-risk items — no longer holds when exploitation timelines are measured in minutes.

AI agents in build pipelines need sandboxing. Tools like Picus Swarm are already using AI agents to compress four-day threat intelligence cycles into three minutes. If you're integrating AI into your CI/CD pipeline, ensure outgoing connections are blocked, use VM snapshots for rollback, and treat AI agent outputs as untrusted until validated.
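"Treat AI agent outputs as untrusted" can be enforced mechanically rather than by convention. Below is a sketch of a validation gate for agent-proposed file changes; the allowed and forbidden path lists are an invented example policy, not a standard. (Blocking outbound network access itself is better done at the runner level, e.g. a network-disabled container, and is out of scope here.)

```python
from pathlib import PurePosixPath

# Example policy (invented): agent patches may touch source and tests,
# never CI workflows, deploy configs, or git internals.
ALLOWED_PREFIXES = ("src/", "tests/")
FORBIDDEN_PREFIXES = (".github/workflows/", "deploy/", ".git/")

def gate(changed_paths):
    """Accept a proposed change set only if every path stays inside the sandbox."""
    for p in changed_paths:
        parts = PurePosixPath(p).parts
        if ".." in parts:
            return False, f"path traversal: {p}"
        norm = PurePosixPath(p).as_posix()
        if any(norm.startswith(f) for f in FORBIDDEN_PREFIXES):
            return False, f"forbidden path: {p}"
        if not norm.startswith(ALLOWED_PREFIXES):
            return False, f"outside sandbox: {p}"
    return True, "ok"

print(gate(["src/parser.c", "tests/test_parser.py"]))  # accepted
print(gate(["src/ok.c", ".github/workflows/ci.yml"]))  # rejected
```

The useful property is that rejection is the default: an agent that proposes edits to the CI workflow, the deploy config, or anything outside the allow-list fails closed before a human ever reviews the diff.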


The Honest Assessment

Mythos is real. The capabilities are verified by independent evaluators including the UK AI Security Institute. The vulnerabilities it found are being patched — slowly.

But there's also a marketing layer here worth acknowledging. Anthropic built its brand on safety-first AI. A limited, invite-only release of a "too dangerous to release" model generates exactly the kind of press that reinforces that brand while simultaneously demonstrating frontier capability. Bruce Schneier put it plainly: it's a PR play — and it worked.

That doesn't make the underlying threat less real. It means you should read the Glasswing coverage critically, separate the verified findings from the extrapolated fears, and focus on the one thing that's clearly true:

AI can now find bugs faster than humans can fix them. That gap is going to widen. And the organizations that build machine-speed remediation pipelines — not just machine-speed discovery — will be the ones that survive the next decade of cybersecurity.

The city is on fire. The question is whether you're holding a bucket or a map.


Posted on Grand Tech Auto: AI City Stories. Sources: Anthropic Project Glasswing announcement, Anthropic Frontier Red Team blog, Forrester, Schneier on Security, NBC News, Foreign Policy.
