August 1, 2025 · Security Automation

Why we can't just auto-fix all our vulnerabilities away, yet


Harry Wetherald

We're not ready to auto-fix everything, at least not yet.

In case you haven’t heard, AI is getting pretty good at software engineering (see Cursor, Devin, Lovable, etc.). AI’s ability to write code and automate technical work has quickly caught the attention of folks in security. The biggest challenge many security teams face today is dealing with vulnerabilities. Cloud and code vulnerabilities require engineering work to fix and, whilst vulnerability backlogs keep getting bigger, engineers are under more pressure than ever to deliver features rather than fix vulnerabilities. For many security teams this creates a nightmare scenario where backlogs spiral out of control and tensions rise. Naturally, many are wondering how AI can help.

AI is already helping engineers fix vulnerabilities (e.g. GitHub Auto-fix). This is leading some to believe that remediation could be a solved problem. A question I get asked all the time is: “why would we bother to triage or prioritize our vulnerabilities, can’t AI just auto-fix them all?”. The assumption from some is that we can simply auto-fix everything and make vulnerabilities go away.

The idea of auto-fixing everything sounds good at first, but it’s not the right way to look at the problem. At least not yet. Before we can auto-fix, we need to drastically change how we understand our vulnerability backlogs.

The problem is that the majority of vulnerabilities are false positives. Scanners tell you that you might be vulnerable, but when you investigate the finding in the context of your environment, there is literally zero risk of it ever being exploited. Ask someone who works with vulnerabilities how many findings are false positives and you’ll get answers ranging from 80% to 99.99%.

If the majority of our vulnerabilities are false positives, why would we try to auto-fix them all? Every change we make to our cloud or our code comes with risk: it can cause downtime or introduce new security issues. Even if auto-fix is 99.9% accurate, when almost all of our findings are false positives and we have hundreds of thousands or millions of them, we’re introducing a ridiculous amount of unnecessary risk into our environment.
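To make that concrete, here’s a back-of-envelope sketch. The numbers below are illustrative assumptions, not measurements from any real backlog:

```python
# Back-of-envelope: the fallout from blindly auto-fixing a whole backlog.
# All three inputs are illustrative assumptions, not real measurements.

total_findings = 500_000       # size of the vulnerability backlog
false_positive_rate = 0.95     # share of findings with no real risk in context
fix_failure_rate = 0.001       # an auto-fix breaks something 0.1% of the time

# Changes made to "remediate" findings that were never exploitable anyway
unnecessary_changes = total_findings * false_positive_rate

# Of those unnecessary changes, how many cause an outage or regression
expected_breakages = unnecessary_changes * fix_failure_rate

print(f"Unnecessary changes: {unnecessary_changes:,.0f}")  # 475,000
print(f"Expected breakages:  {expected_breakages:,.0f}")   # 475
```

Even at 99.9% accuracy, that’s hundreds of outages or regressions caused entirely by fixing things that posed no risk in the first place.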

So the natural next step is to say: okay, AI isn’t quite ready to take the wheel yet, so let’s create AI-generated fixes for engineers to push manually. This is better, but it still has two problems if used on its own. First, this approach still creates work for engineers: they end up with a wall of auto-fixes to review rather than a wall of findings to review. If the findings are >90% false positives, we’re still mostly wasting time. Second, if there are so many false positives, the findings must lack a lot of organisational context. If the findings lack context, the auto-fix suggestions are going to lack context too, and will often give wrong or unhelpful advice.

I do think we’re heading towards a future where remediation of vulnerabilities is completely automated. It’s realistic to think that in a few years we’ll have ‘self-healing’ environments. But we can’t jump all the way to the end; we need to work towards that world step by step.

The first step is to radically improve our understanding of vulnerabilities. We have to be able to identify which vulnerabilities actually matter in the context of our environment, and stop wasting so much time trying to remediate false positives. Once we’re able to see which vulnerabilities matter, we can use AI to provide auto-fix suggestions for humans to review and apply. Then, once we can see those auto-fixes landing 100% reliably, we can start moving towards fully automated remediation.
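As a minimal sketch of that phased approach, the gating logic might look something like the snippet below. The names (`Finding`, `handle`, `AUTO_APPLY_THRESHOLD`) and the threshold value are hypothetical, for illustration only:

```python
# Hypothetical sketch of the three phases described above.
# Names and thresholds are illustrative, not a real product API.

from dataclasses import dataclass

@dataclass
class Finding:
    id: str
    exploitable_in_context: bool  # phase 1: triage with environment context
    fix_confidence: float         # estimated probability the fix is safe

AUTO_APPLY_THRESHOLD = 0.999      # only auto-apply once fixes prove reliable

def handle(finding: Finding) -> str:
    # Phase 1: filter out false positives before doing any remediation work
    if not finding.exploitable_in_context:
        return "close: no real risk in this environment"
    # Phase 2: the default path today, a human reviews an AI-suggested fix
    if finding.fix_confidence < AUTO_APPLY_THRESHOLD:
        return "suggest fix for engineer review"
    # Phase 3: only the most trustworthy fixes ship automatically
    return "auto-apply fix"

print(handle(Finding("F-1024", exploitable_in_context=False, fix_confidence=0.9)))
```

The point of the ordering is that each phase earns trust for the next: triage shrinks the backlog to real risk, review builds a track record for the fixes, and only then does full automation make sense.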

AI should totally revolutionise the way vulnerabilities are managed. The future we should all be shooting for is one where dealing with vulnerabilities, misconfigurations, and similar issues in our cloud and code is almost totally abstracted away. AI agents should be able to find issues, investigate them in context, and fix them without our involvement. But we can’t get there in one leap. We have to fix our broken backlogs before we can see the full potential of AI-generated fixes.