
June 19, 2025 Product

From Rules to Reasoning: The Shift That Made Maze Possible


SANTIAGO CASTIÑEIRA

Security teams are overwhelmed not because they lack findings, but because they lack context. For years, vulnerability management tools flooded teams with alerts, but offered little help understanding what actually matters.

Some tools tried to help by layering additional datapoints onto findings so filtering rules could be applied. A common one: “Is this asset public-facing?” This is how most products still operate: you apply a set of filters to make the intractable number of scanner findings somewhat manageable.

The practice of using rules like these to prioritise vulnerabilities is now widespread. We’ve met many teams that aligned with their C-suites on a fixed set of filters—what they’d fix, what they’d ignore.

The problem is that rules can capture only a tiny fraction of the context needed to truly assess risk. They’re directionally helpful compared to raw CVSS scores, but fall far short of what an in-depth investigation would uncover.

We built Maze because we believe you shouldn’t have to choose between comprehensive coverage and deep insight. The set of findings that matter is constantly evolving and your tooling should adapt to that.

Why Maze Couldn’t Have Been Built Before

One of the most common ways to filter findings we’ve seen is some variation of the following: IF public facing AND critical asset AND CVSS > 8, THEN high risk. 
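A rule like this fits in a few lines of code, which is exactly why it captures so little. Here is a minimal sketch of that kind of static filter; the field names and data are illustrative, not taken from any real product:

```python
# A minimal sketch of the static triage rule described above.
# Field names and sample findings are illustrative assumptions.

def is_high_risk(finding: dict) -> bool:
    """IF public facing AND critical asset AND CVSS > 8, THEN high risk."""
    return (
        finding["public_facing"]
        and finding["critical_asset"]
        and finding["cvss"] > 8
    )

findings = [
    {"id": "CVE-A", "public_facing": True,  "critical_asset": True, "cvss": 9.1},
    {"id": "CVE-B", "public_facing": False, "critical_asset": True, "cvss": 9.8},
]

high_risk = [f["id"] for f in findings if is_high_risk(f)]
# CVE-B is silently dropped even though an internal asset can still be
# reached via lateral movement: the rule has no way to express that context.
```

The second finding is the kind of edge case a fixed filter cannot see: every input to the rule says “ignore it,” yet the surrounding context may say otherwise.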

That’s still the state of the art in current vulnerability management products. But here’s the reality: over 90% of vulnerabilities turn out to be false positives when investigated in context, and rules often fail to identify the edge cases where real breaches actually happen.

Before LLMs, the challenge wasn’t knowing we needed better logic; it was scaling it. Encoding nuanced logic across every OS, app stack, cloud provider, version, and misconfiguration simply wasn’t feasible. Building systems that could reason across all that context was out of reach.

Now, we can take every relevant signal (e.g. kernel versions, library paths, IAM roles, network configs) and reason over them dynamically, with a flexibility that was impossible just a few years ago.

LLMs Changed How We Investigate And Fix Vulnerabilities

LLMs changed the game by allowing us to reason across layers: infrastructure, runtime, configuration, and metadata. At Maze, we use LLMs to go far beyond “Is this asset public-facing?” For every finding, our agents explore a deep decision tree that looks at prerequisites, mitigating factors, additional software in the context, and different layers of configuration. It’s not just about surfacing facts; it’s about interpreting them in context. LLMs let us navigate these branches dynamically and surface what actually matters.
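To make the contrast with static filters concrete, here is an illustrative hand-rolled decision tree over context signals: the kind of branches an agent can explore dynamically. This is a sketch under invented assumptions, not Maze’s actual schema or agent logic:

```python
# Illustrative only: a tiny hand-coded decision tree standing in for the
# kind of branching investigation described above. Every field name here
# is a hypothetical assumption, not a real product schema.

def triage(finding: dict, context: dict) -> str:
    # Prerequisites: is the vulnerable code even present and reachable?
    if finding["affected_function"] not in context["loaded_symbols"]:
        return "false_positive: vulnerable code path never loaded"
    # Mitigating factors: does the environment block exploitation?
    if finding["requires_network"] and not context["listens_on_network"]:
        return "mitigated: service has no network exposure"
    if finding["needs_syscalls"] and context["seccomp_profile"] == "strict":
        return "mitigated: required syscalls filtered by seccomp"
    # No branch ruled it out: escalate with the evidence gathered so far.
    return "exploitable: prerequisites met, no mitigations found"

verdict = triage(
    {"affected_function": "parse_header", "requires_network": True,
     "needs_syscalls": True},
    {"loaded_symbols": {"main", "parse_header"}, "listens_on_network": True,
     "seccomp_profile": "none"},
)
```

Hand-coding branches like these for every OS, library, and cloud configuration is precisely what didn’t scale before; the point of using LLMs is that the branching can adapt to each finding instead of being enumerated up front.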

Once we understand the problem clearly, we can also act. Remediation is no longer a game of static patches or suggesting major version upgrades that aren’t practical. With LLMs, we can generate actionable, context-aware suggestions across source code, infrastructure as code and cloud configurations. 

We don’t just tell you something’s wrong; we show you why it matters and how to fix it.

Why Now: The Stack That Made This Possible

When we started Maze, the concept of an “AI agent” barely existed in practice. Most of what we did early on was teaching LLMs to choose tools and provide input parameters correctly. There were no frameworks, orchestration layers or routers, just raw prompting and experimentation.

We’ve grown alongside the ecosystem. The timing couldn’t have been better.

Inference providers like AWS Bedrock now provide the enterprise-grade building blocks we needed. This gave us the confidence that our customers’ data would always remain under tight control. Tools like batch inference, prompt caching and sufficiently large context windows unlocked reliability and cost performance that couldn’t have been achieved before.

Maze sits at the intersection of LLM maturity, scalable infrastructure, and a real need for signal over noise in cloud security.

From Dream to Reality

We couldn’t have built Maze five years ago. The technology just wasn’t there.

But now with the advent of LLMs and a mature ecosystem to support them, we can do what wasn’t possible before: provide deep contextual analysis at scale, with clear, actionable steps to reduce risk.

This isn’t about AI hype; it’s about finally having the tools to solve a long-standing, painful problem in security.
