3 min read
Over-engineering the deployment phase

Cool, we are finishing the development phase with the team. It’s time to start planning for the production phase. Build, test and deploy!

“Let me check the best scalable and maintainable way to deploy our product.” Those were my last words before I started navigating and evaluating countless options.

I need GitOps and CI/CD. Wow, ArgoCD looks cool, I’m trying this one. Since we are probably using containers, Kubernetes comes in handy! But I need multiple VMs for this, I’m starting the instances right now. What am I missing… Of course, more infra! Ansible, Terraform and Grafana. The Holy Trinity.

Multiple tools, setting up proxies and a load balancer, starting VMs, backups, etc. That’s a lot of work; I could see several distinct engineering roles just for the setup. Even though I’ve worked with each of those tools before at least once, the over-engineering was clear. Time to slow down.

Production environments, generally speaking, scale based on traffic. We are just deploying for the first time, with a small number of active users (that’s how most B2B software starts). Would setting up all of that architecture be worth it, just to serve a trickle of traffic? Most businesses don’t have Uber- or Google-scale traffic, or anything close to 1M daily users.

So then, why the need to implement so many tools? In my opinion, we forget that the most basic deploy is the best deploy. Keep it simple. I suspect that media has a lot of influence on this matter. Consider that in the last couple of years, tech media has been flooded with new tools and their “15 minute tutorials”. People just starting out in this field feel the urge to learn everything, because it’s the industry “standard” or it’s “popular”.

Probably each one of us went through this phase, which is understandable. It can lead us to the premise “if I don’t use this tool, my software is not professional”, or is just “bad”. And we end up drowning in a huge lake of alternatives. Imagine wiring up 10 different tools just to get your full-stack tutorial weather app to run. For the sake of learning? OK. But for your 50 daily users? I don’t think so.

Scalable products should already be capable of scaling by design, if done correctly. Focusing on the fastest deployment first should be the key in most scenarios. Get a VPS from some provider, pull your Docker image, start your containers with docker-compose and call it a day; don’t overthink. That’s more than enough, and you can scale later as needed. Ship it first.
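To make that concrete, here is a minimal sketch of what such a single-VPS setup can look like. The image name, ports, and service layout are placeholders, not a prescription — adjust them to your own project:

```yaml
# docker-compose.yml — minimal single-VPS deploy (illustrative, names are placeholders)
services:
  app:
    image: registry.example.com/myapp:latest  # your pre-built image from CI or a local build
    restart: unless-stopped                   # survive crashes and VPS reboots
    ports:
      - "80:8080"                             # host port 80 -> container port 8080
    env_file: .env                            # keep secrets/config out of the compose file
    depends_on:
      - db
  db:
    image: postgres:16
    restart: unless-stopped
    env_file: .env
    volumes:
      - db-data:/var/lib/postgresql/data      # persist data across container restarts
volumes:
  db-data:
```

Deploying or updating is then just `docker compose pull && docker compose up -d` on the VPS. No orchestrator, no GitOps pipeline, and the `restart` policy already covers the basic “keep it running” requirement.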

Not a hard lesson, but an eye-opener. It took me about half a week to understand this, but it’s well learned for future projects. Just ship it; even if it’s garbage code, just do it. No more to conclude.

— asz