Your Agent Stack Is the New Clean Code
I have this weird feeling of déjà vu.

If you know me, you know I think dogma is one of the biggest curses in software engineering. Dogma is what happens when a practice gets treated as universal truth before context, evidence, and tradeoffs are considered.
Dogma is what made OOP feel like the answer to everything. Dogma is what fueled the whole clean-code industrial complex. I lived through that as a junior developer. From university through my early jobs, I was taught to follow these rules because they were “best practices.” Then you apply critical thinking and realize many of them do not hold in real projects.
Not every time. Not for everything.
Hey, let’s be clear. All these things that get popular actually have good points in them. As much as I’ve pushed back against OOP and dumb rules, some of it does make sense. But developers love fashion as much as anyone waiting all year for Paris Fashion Week. It’s just that our fashion makes even less sense. Thankfully I had great mentors who fueled my critical thinking, and I followed people online who were very vocal about it. But the industry didn’t stop at those old programming concepts. Oh no, no! 🤪 It also pushed microservices on everybody until everybody realized they made no sense unless you were one of the few companies that actually needed them.
But I’m not here to rehash the past. I’m here to tell you we are doing it again.
Now we are seeing a new version in the age of AI. In an era of exploration, we already see people trying to push their half-baked dogma just to sell you their books (well, online courses nowadays).
Overcomplicated and useless agentic workflows. Made-up standards that are not proven. Just another thing that this revolution doesn’t need.
And again, some ideas in every wave are genuinely useful. The problem is not the tools, or the core concepts. The problem is turning preferences and unproven opinions into doctrine.
I keep seeing teams copy-pasting agent workflows before proving they improve delivery, reliability, or product outcomes. Without even learning the basics first.
MCP is a perfect mini-case of this pendulum:
- First, it was treated as inevitable for everything.
- Then came backlash, and teams rediscovered that raw CLIs are often simpler to debug.
- Now it is rising again in a more pragmatic form.
Hype, backlash, partial recovery. New acronym, same cycle.
And look, it’s true that MCP solves a real problem in certain scenarios. But 99% of the time it’s not needed. And those who applied critical thinking, who knew the fundamentals of how this tech works, understood the hype wasn’t worth it. Be one of those. Don’t be the sheep that follows the shepherd blindly.
And if you don’t want to think for yourself and you need a guiding hand, look, even Anthropic, the fan favorite for all the vibe engineers, recommends starting simple and only adding complexity when needed.
Earn Your Place
Every practice needs to prove its value. If it can’t, get rid of it.
If a workflow adds setup cost, cognitive load, or debugging pain, it must pay that back with real outcomes in your context. Not in theory. Not in someone else’s codebase.
Take small functions, one of the most repeated dogmas in our industry. Functions must be small, do one thing. No proof. No study showing smaller functions actually correlate with fewer bugs or better maintainability. Just repetition until it became law.
What actually happens? A dev breaks logic into five tiny named functions. Another dev comes along, sees a function name that looks like what they need, and reuses it. Months later, someone needs to tweak the behavior. Of course they do, because the reusability was born in imagination, not reality. They change that shared function, and now things break in three places nobody expected. If the logic had stayed in one place, the blast radius would have been contained to the one call site that actually needed the change.
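To make that failure mode concrete, here is a minimal sketch (all names are hypothetical, just for illustration): a tiny helper gets extracted “for reuse,” a second caller latches onto it by name, and then a change made for one caller silently alters the other.

```python
# Hypothetical example of premature extraction creating a shared dependency.

def normalize_name(name: str) -> str:
    # Extracted from the invoicing code "to keep functions small".
    return name.strip().title()

def invoice_header(customer: str) -> str:
    return f"Invoice for {normalize_name(customer)}"

def shipping_label(customer: str) -> str:
    # A second dev found the helper by name and reused it here.
    return f"Ship to: {normalize_name(customer)}"

# Months later, invoicing needs uppercase names. If the "fix" goes into
# normalize_name itself, shipping labels change too, even though nobody
# asked for that. Keeping the logic inline (or duplicating it) would have
# confined the change to the invoice code.
print(invoice_header("  ada lovelace "))  # Invoice for Ada Lovelace
print(shipping_label("  ada lovelace "))  # Ship to: Ada Lovelace
```

The point is not that sharing code is bad; it is that sharing couples callers together, and that coupling only pays off when the reuse is real.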
Now think about what the AI equivalent of “functions must be small” is. What rules are being spread as gospel right now, with no proof? Things are moving fast enough that by the time we realize a dogma was wrong, half the industry has already built on top of it. This is actually the best moment we’ve had in years to slow down and think before repeating. AIs are smart, but we also have brains.
Good engineers do not ask, “Is everyone doing this?”
They ask, “Does this work for my context?”
Just Be a Good Engineer
Use AI. Use agents. Experiment aggressively.
But look around. Every tool, every platform, every framework now has its own version of the same thing. Agents, subagents, prompts, skills, modes, MCPs, commands… you name it, someone is selling their spin on it. The complexity is artificial. The race to have the most elaborate setup is just the new cargo cult.
And the irony is that models are actually good enough now that none of it matters that much. If the harness is decent and you know what the AI can and can’t do, just being a good engineer is enough. Asking the right questions. Knowing when to push back. Understanding what you are looking at.
So the last piece of advice is the oldest one. Focus on the fundamentals, and don’t overfill your tools with crap you don’t need.
When you stop questioning and start copying, you are not doing engineering anymore. You are doing cosplay.
Tools are tools, not doctrine.