Kubernetes vs Docker Part 2

I was just going to dump this in an edit at the bottom of my last post, but it was getting long, so I decided to split it out into a separate post.

In my previous article I concluded that, while you may want Kubernetes in a Home Lab or on a home server as a developer to better understand how your code will likely be run in production, you ultimately do not need it otherwise. I want to clarify that stance a little, as it was a generalization.

My primary home server is a single server running 30 containers. It runs a mix of infrastructure (databases, a reverse proxy, etc.), file storage, media players, metrics gathering, and work-related software.

The CPU usage hovers around 0-3% and the RAM usage sits at about 18 GB of 32 GB. The CPU is nothing special, just a Ryzen 5500. Even if the daily load increased significantly, this server could easily handle it. I have no need for replication, automatic scaling, or even load balancing.

I am not saying that Kubernetes wouldn't do some things better, even in my environment. Just that it isn't really needed and the gains are likely not worth the added complexity.

I would need either a lot of additional people making use of the services (hundreds to thousands) or workloads consuming far more resources. And even with more users, the stress would likely fall on one or two containers rather than the entire ecosystem, at which point it would make more sense to simply set up a separate server for those particular systems.

For instance, if the number of users went up drastically, Jellyfin or Nextcloud plus my storage would likely be hit hardest. Rather than setting up Kubernetes and trying to balance the load (which wouldn't help the disk usage anyway), I would first switch to a proper NAS solution with the data replicated across multiple drives. If the load were still too much for the service itself, I would then split that service off to a dedicated server. Since my workloads are all containerized to begin with, I could still switch to Kubernetes later if necessary.

However, in a situation like mine, I still struggle to see the value beyond understanding K8s YAML files as a developer for my job, or being able to test certain load-related metrics and behaviors.
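For context, the kind of YAML I mean looks something like the sketch below: a minimal Kubernetes Deployment for a single Jellyfin container. This is purely illustrative; the names, image tag, and host path are assumptions, not something pulled from my actual setup.

```yaml
# A minimal sketch of a Kubernetes Deployment, assuming a single Jellyfin
# container. Names, image tag, and the volume path are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jellyfin
spec:
  replicas: 1                       # no need for more on a single home server
  selector:
    matchLabels:
      app: jellyfin
  template:
    metadata:
      labels:
        app: jellyfin
    spec:
      containers:
        - name: jellyfin
          image: jellyfin/jellyfin:latest
          ports:
            - containerPort: 8096   # Jellyfin's default web port
          volumeMounts:
            - name: media
              mountPath: /media
      volumes:
        - name: media
          hostPath:
            path: /srv/media        # hypothetical path on the node
```

Even a trivial manifest like this is noticeably more ceremony than the equivalent Compose entry, which is part of why the complexity only pays off once you actually need the scheduling, scaling, and self-healing features.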

The thing about a cloud environment like AWS or Azure is the scale and the infrastructure. For them, having you run your workloads in K8s not only means auto-scaling for you, but also auto-scaling and balancing for them. They have thousands of servers and can automatically spin the hardware up or down to manage their own costs. It is a win-win if you have variations in load or you don't have your own infrastructure. 

For a Home Lab you have the infrastructure. The Home Lab IS the infrastructure. And I suspect most people either leave all their servers on all the time or turn them on manually as needed, rather than having anything automatically deciding what hardware runs based on load.


Now, even with that, I must concede that there are scenarios where it makes sense. There are some "obvious" ones. Obvious in the sense that, if this is you, then you probably aren't reading this, or you know exactly why what I'm saying doesn't apply to you. Things like education or testing of very specific scenarios, for example. And there are also some scenarios which aren't as obvious.

For instance, while my hardware is modest, there are certainly people who build their Home Labs with either MORE modest components (like much older systems) or low-power devices. If you have a farm of Raspberry Pis, for example, or a bunch of systems with incompatible RAM, you may need to build your cluster from multiple machines, as it may not be possible to combine everything into a single server. Or you may want or need redundancy across multiple physical locations.

These situations make the Kubernetes setup a bit more complicated, but in such cases the gains become tangible.

To reiterate though... Kubernetes is most frequently associated with running containerized (AKA Docker) workloads, so you can start with Docker and move to Kubernetes later. My only recommendation would be to start out with Kubernetes on a separate server until you're comfortable.
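As a rough sketch of what that migration path looks like, here is the same hypothetical Jellyfin service expressed as a Docker Compose entry. The image, port, and volume map almost one-to-one onto the Deployment sketched earlier, which is why starting with Docker doesn't lock you out of Kubernetes down the road.

```yaml
# A sketch of the same hypothetical Jellyfin service as a Compose service.
# The image, port, and volume carry over almost directly to the Deployment
# shown earlier in this post.
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    ports:
      - "8096:8096"          # host:container, Jellyfin's default web port
    volumes:
      - /srv/media:/media    # hypothetical media path on the host
    restart: unless-stopped
```

Tools like Kompose can even automate much of this translation, though hand-written manifests usually come out cleaner.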
