Software doesn't say no



Translated by ChatGPT-5.2 from the Hungarian version

I once watched a leader solemnly declare that "teams should be autonomous." It sounded great. Autonomy, freedom, faster progress. And then time passed, and it became clear that autonomy, in this interpretation, meant that nobody talked to anyone anymore. There was no coordination, no shared rules, no enforced decisions. Everyone did their own thing, and the system behaved exactly as you would expect: it fell apart.

What was blindingly obvious? That the operating model needed to be fixed, responsibilities clarified, boundaries made explicit. Instead, leadership came up with this: "Let’s introduce Kubernetes; it will solve it." And this is where the thinking goes off the rails. Not because these tools are inherently bad, but because they are being applied to the wrong problem.

Kubernetes, service meshes, and policy engines were not created to paper over organizational failure. They were responses to real, large-scale reliability and scaling problems: environments where the cost of coordination objectively exceeded what humans could manage (typically not the environments where they are most eagerly adopted). The problem is not the tool itself, but what happens when it is used instead of organizational decisions. When, instead of saying "this is not how we work," we say "the system will prevent it." And with that, we conveniently rid ourselves of conflict.

Microservice architecture fits neatly into this pattern. It is not inherently flawed, and there are genuinely well-functioning, loosely coupled systems out there. But they do not emerge by accident. They require deliberate domain decomposition, clear ownership, respect for Conway’s Law, and continuous human decision-making. When these are missing, microservices do not solve the problem; they obscure it. Instead of a monolith, we end up with thirty services that are just as tightly coupled, only now over HTTP, wrapped in retries and timeouts, and an order of magnitude slower. Complexity was never going away; we just relocated it, elegantly, and once again the user pays the price.
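The relocation described above is easy to make concrete. Here is a minimal Python sketch (all names are hypothetical) of what a formerly in-process call tends to become: the dependency on the other service is unchanged, it has simply moved into a retry loop with backoff and timeouts around it.

```python
import time

def call_with_retries(fetch, retries=3, backoff=0.5, sleep=time.sleep):
    """Call another service's endpoint, retrying on failure.

    The coupling to that service has not gone away; it has moved
    from a plain function call into this loop of retries, backoff,
    and timeouts: slower, and still perfectly able to fail.
    """
    last_error = None
    for attempt in range(retries):
        try:
            return fetch()  # e.g. an HTTP call with its own timeout
        except ConnectionError as err:
            last_error = err
            if attempt < retries - 1:
                sleep(backoff * 2 ** attempt)  # exponential backoff
    raise last_error  # the caller still has to handle the failure
```

Nothing here is wrong as engineering; the point is that every line of it is coordination cost that used to be a function call.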

This is where the pattern becomes truly dangerous. When leadership cannot, or will not, make decisions, whether due to political pressure, bad incentives, legacy constraints, or simple lack of time, responsibility does not disappear. It moves. Straight into the software. Admission controllers, policy engines, guardrails appear to say "no" instead of people. Not because this is technologically elegant, but because it is organizationally convenient.
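To make that mechanism concrete, here is a deliberately simplified Python sketch of an admission-style check (the required labels and the rule are invented for illustration, not taken from any real policy engine): the software delivers the "no" so that no person has to.

```python
def admit(manifest, required_labels=("team", "owner")):
    """Admission-style gate: allow or deny a deployment manifest.

    The rule itself is the easy part. Deciding that these labels
    are required, and owning the fallout when someone is blocked,
    remains a human job; this function only executes the refusal.
    """
    labels = manifest.get("metadata", {}).get("labels", {})
    missing = [name for name in required_labels if name not in labels]
    if missing:
        return False, f"denied: missing labels {missing}"
    return True, "allowed"
```

The check is trivial; the uncomfortable conversation it stands in for is not.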

In this system, engineers are not merely victims. They are part of the dynamic. Often they ask for these tools, drive technological trends, chase more exciting problems and "cleaner" solutions. That is understandable. But the core truth remains: when there is no clear decision about what we are regulating and why, the tool does not make work easier; it smears responsibility.

And yes, software can provide feedback. Through metrics, SLOs, error budgets. But these are not decisions; they are symptoms. Software will not tell you that expectations are contradictory. It will not ask whether a deadline is meaningful. It will not mediate a political conflict between teams. It will merely signal, and if no one is willing to interpret those signals and own the consequences, they become alerts shouted into the void.
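The arithmetic behind an error budget shows why it is a symptom rather than a decision. A back-of-the-envelope sketch (the SLO and request counts here are made up):

```python
def error_budget_remaining(slo, total_requests, failed_requests):
    """Fraction of the error budget left over a window.

    A 99.9% SLO over a million requests allows 1,000 failures.
    The number can burn down to zero, but it cannot say whether
    the target was sane or who must act once it is exhausted.
    """
    allowed_failures = (1 - slo) * total_requests
    return (allowed_failures - failed_requests) / allowed_failures
```

For example, `error_budget_remaining(0.999, 1_000_000, 250)` says three quarters of the budget is left. What to do about the other quarter is not a question the function can answer.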

When everything collapses, the roles are almost predictable. The manager says, "the system behaved this way." Operations repeats the mantra: "it's by design." The developer shrugs: "it's non-deterministic." And the user just sees that it doesn’t work. The decision has no owner; it was lost in the technical fog. What used to be called leadership, organizational discipline, or simply backbone is now wrapped in YAML, policies, and workflows.

So when someone says, "let’s just introduce this tool and everything will be better," it may well be a powerful tool. But it may also exist precisely so no one has to say an uncomfortable truth out loud: coordination is a human task, responsibility is a human burden, and decisions must be made by people. Software can help with that, but it cannot replace it. It will not argue. It will not say no. It will simply execute what it is told.

And that is exactly why it is so dangerously convenient.

Software is an excellent tool. But a terrible alibi.