Most of the “Kubernetes is overkill” takes on LinkedIn are misleading.
A lot of the time, the people saying this were brought into companies where founders had burned through VC money on poor infrastructure decisions. Then comes the mandate to cut costs and justify the cleanup effort. The easiest punching bag becomes Kubernetes.
## Why?
Because managed Kubernetes offerings are expensive on paper. You pay for the control plane, then you pay again for the worker nodes underneath it.
But the comparison people make is rarely fair.
## 1. ECS, Cloud Run, etc. are only “simple” if your team already understands the ecosystem around them
Take ECS for example.
To deploy even a moderately production-ready application, you may end up setting up:
- An ALB
- Target groups
- IAM roles
- Secrets Manager or SSM
- Service discovery
- Autoscaling policies
- Deployment automation wiring
- Separate integrations for observability and networking
A very simple example is scheduled jobs.
In Kubernetes, you define a CronJob manifest.
That’s it.
The cluster handles:
- Scheduling
- Retries
- Execution
- Cleanup
- Logging (uniform across the cluster once configured)
- Basic networking via your CNI’s NetworkPolicies
- Advanced networking via a service mesh
All within the same API model as the rest of your workloads.
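As a sketch, a minimal CronJob manifest might look like this (the name, image, and schedule are illustrative placeholders):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report        # hypothetical job name
spec:
  schedule: "0 2 * * *"       # every day at 02:00
  concurrencyPolicy: Forbid   # don't start a new run if the last one is still going
  jobTemplate:
    spec:
      backoffLimit: 3         # retries handled by the Job controller
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: report
              image: registry.example.com/report:latest  # placeholder image
```

Scheduling, retries, execution, and cleanup all come from the same control plane that runs the rest of your workloads.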
In ECS, scheduled workloads usually become:
- EventBridge schedules
- ECS RunTask targets
- IAM roles
- Networking config
- Logging config
Need conditional execution or chaining? Now Step Functions or Lambda enters the picture.
None of this is “hard.”
But operationally, it spreads one logical workflow across multiple AWS services.
Want progressive delivery pipelines?
Now you’re wiring together:
- CodeDeploy
- ALB weighted target groups
- CloudWatch alarms
- Deployment hooks
- Rollback logic
In Kubernetes, most of this becomes cluster-native abstractions:
- Jobs
- Deployments
- Probes
- Rollout strategies
- Argo Rollouts / Flagger
- Operators
The cluster itself becomes the deployment platform instead of stitching together multiple cloud services.
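For instance, a canary rollout with Argo Rollouts is expressed as a single cluster-native resource. A rough sketch, with illustrative names, image, and traffic weights:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: web                  # hypothetical app name
spec:
  replicas: 4
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:v2   # placeholder image
  strategy:
    canary:
      steps:
        - setWeight: 20           # shift 20% of traffic to the new version
        - pause: {duration: 5m}   # wait, watch metrics
        - setWeight: 50
        - pause: {duration: 5m}   # then promote to 100%
```

The traffic shifting, pauses, and rollback live in one object instead of being spread across CodeDeploy, ALB configuration, and CloudWatch alarms.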
## 2. Kubernetes gives you one consistent model
Deployments, Services, CronJobs, ConfigMaps, Secrets, network policies — everything lives inside one control plane with a unified API model.
That consistency is the real selling point of Kubernetes, not just “containers at scale.”
## 3. ASGs and Instance Groups put you back at square one
Another recommendation people throw around is ASGs or Instance Groups.
At that point, we’re back to square one: running applications directly on VMs.
One of the strongest reasons to adopt an orchestrator in the first place is to stop thinking about VM management all day:
- Better compute sharing
- Workload scheduling
- Self-healing
- Service abstraction
- Easier scaling semantics
If your answer is “just use VMs,” then ask whether you actually needed orchestration at all.
## 4. If you truly want a Kubernetes alternative, look at Nomad
If someone truly wants a Kubernetes alternative, HashiCorp Nomad is probably the closest thing in terms of convenience and operational philosophy.
It’s lighter operationally, supports non-containerized workloads, and feels far less overwhelming initially.
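For a feel of the difference, here is a minimal Nomad job file (names, image, and resource numbers are illustrative). Note that swapping the `docker` driver for `exec` or `raw_exec` lets it run plain binaries, not just containers:

```hcl
job "report" {
  datacenters = ["dc1"]
  type        = "batch"

  group "report" {
    task "generate" {
      driver = "docker"   # or "exec"/"raw_exec" for non-containerized workloads

      config {
        image = "registry.example.com/report:latest"  # placeholder image
      }

      resources {
        cpu    = 200   # MHz
        memory = 128   # MB
      }
    }
  }
}
```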
The biggest downsides are that the ecosystem is not as vibrant and managed offerings are few and far between. Kubernetes won the ecosystem war long ago.
In the HPC (High Performance Computing) world, schedulers like SLURM and OpenPBS are more popular, since they were designed with HPC needs in mind.
## So what am I implying here?
Am I saying “always use Kubernetes”?
No.
Am I saying “Kubernetes is always easy”?
No.
What I’m saying is: choose what fits.
Ask:
- Do you even need an orchestrator?
- Does the extra control plane cost justify the sanity of your team?
- Is your team already comfortable with Kubernetes?
- Are you optimizing for portability, hiring, ecosystem, or simplicity?
- Is your scale problem one of operational complexity, or just traffic volume?
- How many services do you actually have?
Ask the right questions.
Don’t take constipation pills when you have diarrhea.

