
December 22, 2025 · Security

The Language Barrier: Why Security and Engineering Are Never Aligned


Ammar Alim

Security opens the meeting with a slide full of red. Engineering shows up already late for their next planning session. The conversation starts with "Why haven't we looked at these yet?" and at that moment, the discussion is already lost. Ammar Alim has spent his career on both sides of this divide. His diagnosis: this isn't a conflict problem. It's a language problem. And it's a trust problem.

There is a familiar scene in many organizations.

Security opens the meeting with a list of tickets and a slide full of red. Engineering arrives already tired, already late for their next planning session. The conversation starts with questions like, “Why have you not looked at these yet?” and “What is the status of these vulnerabilities?”

At that moment, the discussion is already lost.

It is not because the people are bad or because either side does not care about security. It is because they are speaking different languages.

Security is thinking about threats, incidents, and risk. Engineering is thinking about features, uptime, and delivery commitments. Vulnerability management sits at the worst possible intersection of those worlds. Security needs things fixed. Engineering sees noise.

I have spent my career as both an engineer and a security leader. I have been on the receiving end of those tickets, and I have also been the person sending them. What I have learned is simple:

This is not a conflict problem. It is a language problem. And it is a trust problem.

When “critical” sounds like “panic”

When security tells an engineering team, “This is critical,” what do they actually hear?

Early in my career, I heard one thing: fear.

My heart rate would spike. I would drop everything and rush to investigate. Then I would discover that the vulnerable resource was sitting on an isolated subnet, unreachable from anywhere that mattered. Or the library version was technically vulnerable but blocked by mitigating controls. The label said “critical” but the real world conditions didn’t match.

After enough of those experiences, the emotion changes. You move from fear to skepticism.

Now, when someone says “This is critical,” my instinct is no longer to panic; it is to verify.

  • Is it on a public subnet?
  • Is it reachable from the internet?
  • Is there a working exploit path today?
  • Are there mitigating controls already in place?

And about 15 other steps to make sure this is actually real.

For me, a truly critical vulnerability is one where all the conditions are present for a data breach to happen today. Reachable, exploitable, relevant to our environment, and in line with known attack vectors. 
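As a rough illustration only, here is a minimal sketch of how that definition could be expressed as a triage check. The field names and data shape are hypothetical, not the output of any particular scanner:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """One scanner finding, reduced to the facts that matter for triage."""
    cve_id: str
    asset: str
    internet_reachable: bool            # reachable from an attacker's vantage point
    working_exploit_path: bool          # a viable exploit path exists today
    matches_known_attack_vectors: bool  # relevant to how this environment is attacked
    mitigating_controls: list[str]      # e.g. network policy, auth layer, WAF rule

def is_truly_critical(finding: Finding) -> bool:
    """Critical only when every condition for a breach is present today."""
    return (
        finding.internet_reachable
        and finding.working_exploit_path
        and finding.matches_known_attack_vectors
        and not finding.mitigating_controls
    )
```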

If you label everything critical, nothing is.

This is how trust capital gets burned. A security team sends eight urgent alerts, engineering investigates, and seven of them turn out to be non-issues in reality. The next time a ticket shows up, no one is in a hurry. Not because they do not care, but because their lived experience taught them that the label and the reality are disconnected.

Once that trust is gone, it is very hard to get back.

We do not understand each other’s work

It is easy to fall into simple stories.

From the security side, the story often sounds like: “Engineering doesn’t want to do security.”

From the engineering side, the story often sounds like: “Security has no idea what we actually do.”

Both are misconceptions.

The biggest misconception security has about engineering is that engineers do not care about security. In reality, most engineers care deeply. They care about shipping reliable, resilient systems that don’t wake them up at two in the morning. They care about protecting users and the company. What they do not care about is chasing noise that does not contribute to their goals or the business.

The biggest misconception engineers have about security is that security doesn’t understand what they do and is not interested in learning. Too often, that perception is earned. I have seen vulnerability analysts walk into a conversation with no understanding of the continuous delivery pipeline, no idea how the release train works, and no curiosity about how the team ships software.

And this is where empathy is needed. 

Not the “I feel bad that you are busy…” kind of empathy.

The kind that asks: “What are you working on? How do you ship? What does your release cycle look like? What tools do you live in every day? Where are you under pressure from your own leadership?”

As a security leader, knowing these answers and asking these questions can earn a ton of goodwill from your engineering team. 

If you cannot answer a few of those basic questions about the teams you are asking to fix vulnerabilities, you have not earned the right to act surprised when they ignore your queue.

Vulnerability management became a volume problem

Look at how many organizations run vulnerability management today.

Tools are purchased by security, installed by a central team, and then pointed at production. Overnight, thousands of findings appear. The tool dutifully opens tickets into Jira or another system until the API hits rate limits (not really…not normally at least). Security engineers spend their days on vendor calls about why the integration is breaking instead of on remediation strategy.

The conversation becomes about volume instead of outcomes.

“We have this many open tickets.”
“Our backlog grew by this percentage.”
“We scanned this many assets.”

It is easy to confuse motion with progress. In many places, vulnerability management quietly turned into the work of managing volume rather than the work of actually making systems safer.

Underneath all of this are a few simple truths about security teams today:

Security is a false positive problem.
And security teams are data teams.

We are flooded with signals. Logs, alerts, scanner output, threat intelligence feeds, configuration baselines, compliance checks. If you do not have the capability to sift through large amounts of data, make sense of it, and prioritize what matters, you will lose. You will either drown in noise or outsource the problem to engineering by simply forwarding everything to them.

“Here is the output, you understand your system, you figure it out.”

That is not a partnership. That’s passing the buck.

Context and exploitability as a shared language

So what does a better language look like?

For me, exploitability is a central concept. Not “the exploit exists somewhere on the internet,” but a clear set of conditions:

  • Is the asset reachable from an attacker’s vantage point?
  • Are the right versions present and callable in practice?
  • Are there mitigating controls, such as network policies or authentication layers, that block real world exploitation?
  • Is the vulnerability widely known, or is awareness still low?

When all of those conditions are present, a vulnerability is not just theoretically dangerous. It is actively exploitable in the context of your environment. Then, we determine what severity actually looks like.
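One hedged way to picture that step, continuing the earlier sketch: the four conditions above collapse into a contextual severity rather than a scanner label. The tiers here are illustrative, not a standard; tune them to your own risk appetite.

```python
def contextual_severity(reachable: bool, callable_in_practice: bool,
                        mitigated: bool, widely_known: bool) -> str:
    """Translate raw findings into a severity both teams can act on."""
    if reachable and callable_in_practice and not mitigated:
        return "critical" if widely_known else "high"
    if reachable and callable_in_practice:
        return "medium"  # real issue, but blocked by controls today
    return "low"         # track it; it does not interrupt the roadmap
```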

This is where security and engineering can meet.

Most engineers are very rational about prioritization. If you bring them a small number of findings with strong context, with clear evidence of exploitability in their stack, and with a clear path to fix, they will act. Especially if they can see how that work aligns with reliability, uptime, or customer trust.

What they will not spend their day doing is chasing alerts that may be technically accurate but practically irrelevant.

Why tools alone will not save you

It is tempting to see this as a tooling problem. If the scanner was better, if the platform was smarter, if the dashboards were clearer, alignment would appear.

Tools matter, but they should be the last thing you decide, not the first.

The first step is intention. What outcomes do you actually want? Faster patching of exploitable issues in internet facing services? Better coverage of critical compliance controls in regulated workloads? Fewer severity one incidents caused by drift?

The second step is context. Tickets should not be raw findings. They should be enriched artifacts that live in the language of the team receiving them. That includes:

  • Where in the architecture this issue lives
  • How it is reachable
  • Which service or team owns it
  • What “critical” means in this specific case
  • How the fix intersects with their current roadmap

Only then does it make sense to talk about platforms and automation, and about which ones can help fill the gaps in delivering those answers.
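To make “enriched artifact” concrete, here is a minimal, hypothetical shape for such a ticket, built from the context fields above. The field names are mine, not a product schema; map them onto whatever tracker your teams already use.

```python
from dataclasses import dataclass

@dataclass
class EnrichedTicket:
    """A finding translated into the language of the team that will fix it."""
    cve_id: str
    owning_team: str    # which service or team owns it
    location: str       # where in the architecture this issue lives
    reachability: str   # how it is reachable, in plain terms
    why_critical: str   # what "critical" means in this specific case
    roadmap_impact: str # how the fix intersects with their current roadmap
    suggested_fix: str  # the clear path to fix, if one is known
```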

How AI can help bridge the gap

This is where I see a real opportunity with AI and agents.

Foundation models are trained on the public internet, not on your environment. They come with a broad understanding of code, infrastructure, and vulnerabilities, but no awareness of your reality. To be useful, they need context. That is where retrieval, embeddings, and structured data come in.

If you give an AI agent a clean, well structured view of your world, it can become a powerful translator between security and engineering.

Imagine a system that can ingest:

  • Scanner findings
  • Infrastructure as code
  • Configuration data
  • Network topology
  • Service ownership
  • Runtime telemetry
  • Ticket history

Now imagine it strips out the noise in those tickets that humans do not need to see: assignee fields, sprint numbers, metadata that only confuses the model. What remains is deeply relevant, tied to a specific CVE, a specific tech stack, and a specific location in your environment.
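A sketch of that filtering step, assuming a generic dict-shaped ticket; the field names are placeholders for whatever your tracker actually exports:

```python
# Context the agent needs to reason about a finding, versus tracker
# bookkeeping that only adds noise. Field names are placeholders.
RELEVANT_FIELDS = {
    "cve_id", "package", "version", "service", "environment",
    "reachability", "mitigating_controls", "description",
}

def prepare_for_agent(raw_ticket: dict) -> dict:
    """Drop assignees, sprint numbers, and other metadata before the model sees it."""
    return {k: v for k, v in raw_ticket.items() if k in RELEVANT_FIELDS}
```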

With that, an agent can:

  • Cluster similar findings
  • Identify which ones are actually reachable and exploitable
  • Propose precise, environment aware fixes
  • And generate remediation plans in the language of the team that owns the service

In other words, it can do a large portion of the translation work that humans do slowly (and sometimes poorly) today.

As engineers, our limitation is the context window. We cannot hold every wiki page, every design document, every runbook, and every log line in our head while we work a single ticket. For an agent, that is normal. Many tasks that become “a full sprint” for a human can be reduced to minutes.

The key is not to let the agent spam engineers with more noise. You want to give the agent as much data as possible, then strictly limit what leaves the agent and reaches people.
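One way to picture that last rule, again as a hypothetical sketch rather than any product’s behavior: a hard gate between the agent’s analysis and a human’s queue.

```python
def should_reach_engineers(assessment: dict) -> bool:
    """Forward a finding only when it is reachable, exploitable here, and owned.

    The keys are illustrative outputs of the agent's own analysis; anything
    that fails this gate stays with the agent and the security team.
    """
    return (
        assessment.get("reachable", False)
        and assessment.get("exploitable_in_this_environment", False)
        and bool(assessment.get("owning_team"))
    )
```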

Ownership, incentives, and the reset conversation

Who should own this problem?

The honest answer is that it depends on the organization. Some companies operate with strong central platform teams, some with fully autonomous product teams. In some places security is deeply hands on, in others it is almost entirely advisory.

That said, I have a clear view on accountability.

Every other function with specialized expertise in a company takes responsibility for its domain. Human resources does not say “People are everyone’s problem.” Legal does not say “Compliance is everyone’s problem.” They may partner with others, but accountability is clear.

In security, we often say “Security is everyone’s responsibility” and then distribute the work almost entirely to developers who were never told that in their job description. They join to build products and suddenly discover a large vulnerability backlog waiting for them.

I believe security should be accountable.

That does not mean security teams fix every bug. It means they own the quality of the signal, the strategy, the process design, and the relationship. They are responsible for making security an enabler rather than a tax. They are responsible for making the secure path the easy path.

If you asked me to fix this problem in your organization, my first step would be a reset conversation with engineering leadership. That conversation would sound something like:

“Whatever we’ve done so far has not worked. Let’s acknowledge that together. From now on, you should not receive walls of tickets without context. You should not be asked to drop your entire roadmap for issues that have no clear business impact. When we bring you work, it will be enriched, prioritized, and aligned with your reality. In return, we ask you to engage with us as partners, not as an escalation queue.”

Security can’t win if the company doesn’t win. If you want more funding for security initiatives, help engineering move faster with confidence. If you want leadership to care about risk reduction, show how clear, contextual security work preserves revenue and brand.

What good looks like

I wish I could say I have seen this solved many times. I have not.

In fifteen years working across security and infrastructure, I can only think of a handful of places that truly got ahead of this problem. They did not get there from a position of comfort. They hit rock bottom. Their Jira instances were unusable, their backlogs were ignored, and both security and engineering had given up on the existing models.

Their programs needed a hard reset. 

They rebuilt cloud accounts as code, with security baked in by default. They invested heavily in solving the false positive problem, not just buying more tools. They put significant effort into cultural change, communication, and process design.

It was not easy. It required years of investment and strong alignment between leadership in security, engineering, and product. But the outcome was very different from the starting point.

New accounts came online already aligned with best practices. Critical alerts were rare, well understood, and acted on. Security conversations in engineering meetings were about tradeoffs and design, not ticket counts.

Five years from now

If we get this right over the next five years, I think we will see three important changes.

First, more secure software (obviously). Not because we found more vulnerabilities, but because we prevented more issues through better architecture, better defaults, and smarter automation.

Second, less burnout. Engineers and security analysts will spend less time chasing noise and more time on work that actually matters. AI agents will handle much of the tedious translation work, the paperwork, the repetitive triage that currently fills so much of our day.

Third, a healthier relationship between security and engineering. One where “this is critical” means the same thing to both sides. One where security teams are seen as partners who understand the reality of building and shipping software. One where engineers can say, “Security makes my job easier,” and mean it.

Underneath all the tools, all the AI, and all the frameworks, this is still a human problem.

We wake up thinking about our own pressures and our own success. That is natural. But the more we understand each other’s work, constraints, and goals, the easier it becomes to design systems and processes that work for everyone.

If we can teach security and engineering to speak a shared language, alignment will follow, and more importantly, so will progress.

 

This article is a guest contribution from Ammar Alim.

If you’d like to share your perspective or submit your own story, reach out to us through our contact form. We’re always looking for sharp voices with insights to share.