January 5, 2026 · Security

Vulnerability Déjà Vu: Why the Same Bug Keeps Coming Back


NUNO LOPES

You patch a critical CVE. Your scanner goes green. You close the ticket and move on to the next fire. Then, six months later, you see headlines about the "same" vulnerability being actively exploited. Didn't we already fix that?

This happens more often than most security teams realize. And it's not that your team failed to patch. It's that patches themselves fail more often than we admit.

In traditional infrastructure, patching was straightforward: you updated the server and it stayed patched until someone explicitly changed something. Cloud environments have changed the math. Your infrastructure is now code, and your "servers" are ephemeral, spun up from AMIs, container images, and launch templates that live somewhere else. When you patched that critical CVE last quarter, you probably patched the running instances. But did you update the base AMI your auto-scaling group uses to spin up new instances? The container image in your registry? The Dockerfile in your repo? The Terraform module your team copies for new deployments, still referencing an old image ID?

If not, every auto-scaling event, every new deployment, every disaster recovery failover is pulling from a source that still contains the original vulnerability. In cloud environments, vulnerabilities don't just recur because vendors ship incomplete fixes; they recur because your infrastructure keeps rebuilding itself from sources that were never updated.

And when a variant or bypass drops six months later, you're exposed to both the original bug your "patched" infrastructure keeps reintroducing and the new variant you haven't addressed yet.
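The rebuild half of that problem is at least checkable. Below is a minimal sketch, assuming boto3 with AWS credentials already configured and auto-scaling groups that use launch templates; the patch date and watch logic are placeholders you'd adapt. It flags groups whose underlying AMI predates your last patch cycle:

```python
# Sketch: flag auto-scaling groups whose AMI predates the last patch date,
# i.e. groups that will quietly reintroduce the CVE on the next scale-out.
# Assumes boto3 is installed, AWS credentials are configured, and the ASGs
# use launch templates (launch configurations would need a similar lookup).
from datetime import datetime, timezone

import boto3

PATCHED_AFTER = datetime(2025, 10, 1, tzinfo=timezone.utc)  # placeholder: your last patch cycle

autoscaling = boto3.client("autoscaling")
ec2 = boto3.client("ec2")

for asg in autoscaling.describe_auto_scaling_groups()["AutoScalingGroups"]:
    lt = asg.get("LaunchTemplate")
    if not lt:
        continue  # launch configurations / mixed-instances policies skipped in this sketch
    version = ec2.describe_launch_template_versions(
        LaunchTemplateId=lt["LaunchTemplateId"],
        Versions=[lt.get("Version", "$Default")],
    )["LaunchTemplateVersions"][0]
    image_id = version["LaunchTemplateData"].get("ImageId")
    if not image_id:
        continue
    image = ec2.describe_images(ImageIds=[image_id])["Images"][0]
    created = datetime.fromisoformat(image["CreationDate"].replace("Z", "+00:00"))
    if created < PATCHED_AFTER:
        print(f"{asg['AutoScalingGroupName']}: AMI {image_id} built {created:%Y-%m-%d} "
              "predates the last patch cycle; new instances may reintroduce the bug")
```

The same question applies to container registries and Terraform modules: anything your infrastructure rebuilds from needs an answer to "when was this last rebuilt?"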

How often do variants actually come back? Google's Threat Analysis Group found that over 40% of the zero-days exploited in the wild in 2022 were variants of previously reported vulnerabilities. More than 20% were variants of bugs that had been actively exploited and patched just the year before. Attackers came back within 12 months with a variant of the original bug.

This isn't a fringe problem. It's a structural one. And if your vulnerability management program treats every CVE as a one-time event, you're going to keep getting surprised when old bugs refuse to stay dead.

How Patching Is Supposed to Work

Most security programs treat patching as permanent. Scan, prioritize, remediate, done. Each CVE is an independent event: your scanner confirms the patched version is installed, your compliance report marks it resolved. But what if the same bug comes back months later wearing a different CVE number? What if the patch only blocked one exploitation path? What if a code refactor accidentally removed the fix entirely? 

These aren't hypotheticals.

Three Ways Patches Fail

Vulnerabilities recur in predictable patterns. Attackers study patches the moment they drop, looking for gaps in the fix or similar bugs nearby. Once you recognize the same patterns they're looking for, you stop playing catch-up.

Regressions: The bug you fixed gets unfixed

A vulnerability gets patched correctly. Years pass. Then someone refactors the code and accidentally removes the security fix, with no idea they just reintroduced a vulnerability from 2006. Catching this requires either meticulous code comments explaining why a fix exists, or tooling that compares commits against historical CVE patches; most teams have neither. Without that context, the fix looks like unnecessary complexity that can safely be removed.

Example: RegreSSHion (CVE-2024-6387)

In July 2024, researchers disclosed a signal handler race condition in OpenSSH enabling unauthenticated remote code execution. The bug had originally been fixed in 2006 as CVE-2006-5051. In October 2020, a routine code refactor removed one preprocessor directive, silently reintroducing the vulnerability. By the time of disclosure, the regression had been in production for nearly four years, with over 14 million internet-facing OpenSSH instances potentially exposed.

The pattern matters more than exploitability. Even with technical barriers to exploitation (specific glibc versions, thousands of connection attempts needed), the organizational failure is clear: security fixes that predate current developers get removed because no one remembers what they protected against.
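One lightweight countermeasure is a regression guard in CI: record the pattern each security fix depends on and fail the build if it ever disappears. The file paths, patterns, and CVE IDs below are hypothetical placeholders; the point is that the reason a fix exists becomes machine-checked instead of tribal knowledge.

```python
# Sketch of a CI regression guard: fail the build if a past security fix
# is no longer present in the source tree. The paths, patterns, and CVE IDs
# below are hypothetical placeholders; record your own when you land a fix.
import re
import sys
from pathlib import Path

# Each entry: (file to check, regex the fix introduced, why it matters)
SECURITY_FIX_PINS = [
    (
        "src/net/session.c",                      # hypothetical file
        r"async_signal_safe_log",                  # hypothetical guard the fix added
        "CVE-XXXX-YYYY: logging must stay out of the signal handler",
    ),
]

def main() -> int:
    failures = 0
    for path, pattern, reason in SECURITY_FIX_PINS:
        source = Path(path)
        text = source.read_text(errors="replace") if source.exists() else ""
        if not re.search(pattern, text):
            print(f"REGRESSION RISK: {path} no longer matches '{pattern}' ({reason})")
            failures += 1
    return 1 if failures else 0

if __name__ == "__main__":
    sys.exit(main())
```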

Incomplete fixes: The patch blocks the exploit, not the bug

Under pressure to ship fast, vendors often block the specific exploitation technique rather than addressing the underlying flaw. The researcher found one door. The patch locked that door. But the room has other doors.

Google's Project Zero has written about this pattern. Their assessment is blunt: "Vendors often release narrow patches, creating an opportunity for attackers to iterate and discover new variants."

Example: Windows SmartScreen Bypasses

In December 2022, Microsoft patched CVE-2022-44698, a SmartScreen bypass used to deliver Magniber ransomware. The fix addressed how SmartScreen handled files with malformed Authenticode signatures.

By January 2023, one month later, Google TAG observed attackers exploiting CVE-2023-24880, a new SmartScreen bypass using a different malformed-signature technique. The original patch was correct for the specific attack it addressed. It just didn't address the root cause: inadequate validation of untrusted files.

Variants: Same vulnerability class, new entry point

Sometimes the patch was correct for the specific code path it addressed, but the same type of flaw exists elsewhere in the codebase. A deserialization bug gets fixed in one endpoint; six months later, the same pattern appears in a different endpoint nobody checked.

Attackers think in vulnerability classes. Defenders who only track individual CVEs miss this entirely. The good news: CVEs are mapped to CWEs (Common Weakness Enumerations), which let you search for related bugs across your stack if you know how to look.

Example: Microsoft Exchange Proxy Family

ProxyLogon (CVE-2021-26855) exploited Exchange's Client Access Service, where header injection enabled SSRF to bypass authentication for pre-auth RCE. Microsoft patched it. Exploitation continued.

Months later, ProxyShell emerged (CVE-2021-34473, CVE-2021-34523, CVE-2021-31207), different bugs exploiting related assumptions about path normalization and request routing.

In 2022, ProxyNotShell arrived (CVE-2022-41040, CVE-2022-41082), again exploiting trust boundary assumptions in Exchange's architecture.

Three generations of critical vulnerabilities, all actively exploited, all rooted in complex request routing logic that was never fundamentally simplified. Each patch addressed specific techniques without resolving the architectural patterns that kept producing exploitable paths.

What Defenders Should Do

You can't control how vendors patch. You can control how you respond.

Patching a CVE doesn't guarantee protection from that class of vulnerability. Attackers think in patterns and vulnerability classes, not individual CVEs. Here's how to do the same.

Match Your Response to the Failure Mode

For regressions: Don't trust patches on components with CVE history to stay fixed. Keep compensating controls active for network services; for something like SSH, that means bastion hosts, IP allowlisting, or certificate-based auth. When major updates ship, check whether changelogs mention refactoring in areas with prior CVEs.

For incomplete fixes: Assume bypasses are coming. WAF rules, network segmentation, disabled features: keep them in place even after patching. Set alerts for "bypass" plus recent CVE names in security news. When one drops, check whether your existing mitigations cover the new technique. If a feature keeps getting bypassed (like SmartScreen edge cases), evaluate whether you can disable it entirely.
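One way to automate that alerting without relying on headlines: poll CISA's Known Exploited Vulnerabilities feed and flag new entries touching products you've recently patched, especially ones that mention a bypass. A sketch (requires the requests package; the watch list and lookback window are placeholders):

```python
# Sketch: watch CISA's KEV feed for new entries on products you care about,
# flagging ones that look like bypasses of something you already patched.
from datetime import date, timedelta

import requests

KEV_URL = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"
WATCHED = {"microsoft", "smartscreen", "exchange", "openssh"}  # placeholder watch list
SINCE = date.today() - timedelta(days=7)

feed = requests.get(KEV_URL, timeout=30).json()
for vuln in feed["vulnerabilities"]:
    added = date.fromisoformat(vuln["dateAdded"])
    haystack = " ".join(
        vuln.get(field, "")
        for field in ("vendorProject", "product", "vulnerabilityName", "shortDescription")
    ).lower()
    if added >= SINCE and any(term in haystack for term in WATCHED):
        tag = " [mentions bypass]" if "bypass" in haystack else ""
        print(f"{vuln['cveID']}: {vuln['vendorProject']} {vuln['product']} (added {added}){tag}")
```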

For variants: Search NVD for the same CWE plus your vendor. Example: you patch an SSRF (CWE-918) in Exchange - search "CWE-918 Microsoft Exchange" to find related bugs you might have missed.
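That search is also scriptable against the NVD API; version 2.0 of the CVE API accepts a CWE filter and a keyword. A minimal sketch using the public endpoint (unauthenticated requests are rate-limited; an API key raises the limit):

```python
# Sketch: find other CVEs in the same weakness class (CWE-918, SSRF)
# affecting the same product, via the public NVD CVE API 2.0.
import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"
params = {
    "cweId": "CWE-918",                     # the weakness class you just patched
    "keywordSearch": "Microsoft Exchange",  # the product you patched it in
    "resultsPerPage": 50,
}

resp = requests.get(NVD_API, params=params, timeout=30)
resp.raise_for_status()

for item in resp.json().get("vulnerabilities", []):
    cve = item["cve"]
    summary = next((d["value"] for d in cve["descriptions"] if d["lang"] == "en"), "")
    print(f"{cve['id']}: {summary[:120]}")
```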

Automate the Awareness

Nobody is manually reading changelogs and security mailing lists consistently. If you say you are, I don't believe you. The realistic approach is to let tools do the monitoring.

  • OpenCVE: subscribe to critical vendors/products. Set it once, get alerts when new CVEs drop.
  • Feed alerts into Slack or your ticketing system so they're visible where your team works (see the sketch after this list).
  • Point AI agents at advisories and changelogs for watched components. Ask: "Is this related to previous CVEs? Does it mention bypass, variant, or regression?" Let the agent do the reading you won't.
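The Slack plumbing is the easy part. A minimal sketch using an incoming webhook (the webhook URL and the example alert are placeholders):

```python
# Sketch: push a CVE alert into Slack via an incoming webhook so it lands
# where the team already works. The webhook URL below is a placeholder.
import requests

SLACK_WEBHOOK = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

def post_alert(cve_id: str, component: str, note: str) -> None:
    """Send a one-line vulnerability alert to the team channel."""
    text = f":rotating_light: {cve_id} on watched component *{component}*: {note}"
    requests.post(SLACK_WEBHOOK, json={"text": text}, timeout=10).raise_for_status()

# Example: wire this to whatever produces your alerts (OpenCVE notifications, a cron job, etc.)
post_alert("CVE-2023-24880", "Windows SmartScreen", "possible bypass of a previously patched CVE")
```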

Putting It Into Practice

Build visibility:

  • Generate SBOMs with Syft or Trivy (a parsing sketch follows this list)
  • Track vulnerability history per component with OWASP Dependency-Track
  • Query historical CVE data on CVEDetails.com
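If the SBOMs are in CycloneDX JSON (both Syft and Trivy can emit that format), turning one into a watch list takes a few lines. A sketch, assuming an SBOM file produced by something like `syft <image> -o cyclonedx-json`:

```python
# Sketch: pull component names and versions out of a CycloneDX JSON SBOM
# (as produced by Syft or Trivy) to seed a vulnerability watch list.
import json
from pathlib import Path

sbom = json.loads(Path("sbom.cdx.json").read_text())  # placeholder path

watch_list = sorted(
    {(c.get("name", "?"), c.get("version", "?")) for c in sbom.get("components", [])}
)

for name, version in watch_list:
    # Feed these into OpenCVE subscriptions, Dependency-Track, or your own tooling.
    print(f"{name} {version}")
```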

The workflow:

  • Patch a high-profile CVE (CISA KEV, CVSS 9+, headlines) → add that component to your OpenCVE watch list
  • New CVE on watched component → check CWE match or "bypass"/"variant" mentions → prioritize regardless of CVSS
  • Monthly: review which components and CWEs keep recurring
  • Patterns emerge → escalate to defense-in-depth: restrict network exposure (reverse proxy, WAF hardening), add runtime protection, or start the replacement conversation

The goal isn't perfect patching. It's to stop being surprised when patches aren't perfect. When a component keeps generating CVEs, don't just patch faster; change your relationship with it.