Moving the home lab to Kubernetes

Kubernetes has become the de facto standard for container orchestration. There are certainly other options, but Kubernetes remains the most prevalent. With that in mind, I decided to migrate my home lab from Docker servers to Kubernetes clusters.

Before: Docker Servers

Long story short: my home lab has evolved from Windows servers running IIS, to a mix of Linux and Windows containers, to Linux-only containers. The apps are containerized, but the databases still run on standalone SQL servers.

Build and deployment are automated: builds run through Azure DevOps Pipelines with self-hosted agents (TeamCity before that), and deployments go through Octopus Deploy. Container images for my projects live in a ProGet feed.
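For flavor, here is a minimal sketch of what one of those build pipelines looks like. The agent pool and service connection names are assumptions, not my actual setup:

```yaml
# azure-pipelines.yml — minimal sketch; all names are placeholders
trigger:
  - main

pool:
  name: Default                      # self-hosted agent pool (assumed name)

steps:
  - task: Docker@2
    inputs:
      containerRegistry: proget-feed # hypothetical service connection to the ProGet Docker feed
      repository: my-api             # placeholder image name
      command: buildAndPush
      Dockerfile: Dockerfile
      tags: |
        $(Build.BuildNumber)
```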

The Plan

“Consolidate” (and I’ll tell you later why that is in quotes) my servers into Kubernetes clusters. It seemed like an easy plan.

  • Internal K8s cluster – runs Rancher and any internal tooling (including Elasticsearch/Kibana) I want available internally but not exposed externally.
  • Non-production K8s cluster – runs my *.net and *.org sites, used for test and staging environments.
  • Production K8s cluster – runs my *.com sites (external), including any external tooling.

I spent some time learning Packer to provision the Hyper-V VMs for my clusters. Each cluster ended up with one control-plane node (4 vCPU, 8 GB RAM) and two workers (2 vCPU, 6 GB RAM).

The Results

The Kubernetes Clusters

There was a LOT of trial and error in getting Kubernetes going, particularly with Rancher. So much, in fact, that I probably provisioned each cluster three or four times because I felt like I had messed up and wanted to start over.

Initially, I tried to provision the K8s cluster manually. Yes, it worked… but RKE is nicer. After my manually provisioned cluster went down, I re-provisioned the internal cluster with RKE. That makes updates easier, since I have the config file.
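For reference, RKE drives the whole cluster from a single cluster.yml consumed by rke up. A minimal sketch matching my node layout; the addresses and SSH user are placeholders:

```yaml
# cluster.yml — consumed by `rke up`; addresses and user are placeholders
nodes:
  - address: 192.168.1.50
    user: rancher
    role: [controlplane, etcd]
  - address: 192.168.1.51
    user: rancher
    role: [worker]
  - address: 192.168.1.52
    user: rancher
    role: [worker]
```

Keeping that file (and the cluster.rkestate it generates) somewhere safe is what makes later upgrades a one-command affair.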

I provisioned the non-production and production clusters through the Rancher GUI. However, Rancher itself was running on that manually provisioned cluster, so when it went down, I lost the config files. I currently have two clusters that look like “imported” clusters in Rancher, so they are harder to manage through the Rancher GUI.


To make use of persistent volume claims, I configured NFS on my Synology and installed the nfs-subdir-external-provisioner in all of my clusters. It installs a storage class that persistent volume claims can reference, and it provisions a subdirectory on the NFS share for each claim.
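The provisioner is typically installed from its Helm chart, pointed at the share via the nfs.server and nfs.path values; by default, the storage class it creates is named nfs-client. After that, a claim is only a few lines. A sketch, with the claim name and size as placeholders:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data                # placeholder claim name
spec:
  storageClassName: nfs-client  # default class created by the provisioner
  accessModes:
    - ReadWriteMany             # NFS allows shared read/write across pods
  resources:
    requests:
      storage: 5Gi
```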


Right now, I’m using the NGINX Ingress controller that ships with Rancher. I haven’t played with it much beyond the basics. Perhaps more on that when I dig in.
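The basics are enough to route a site, though. A minimal Ingress sketch, assuming a hypothetical Service named example-site and the nginx ingress class:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-site             # placeholder name
spec:
  ingressClassName: nginx
  rules:
    - host: example.com          # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-site   # hypothetical backing Service
                port:
                  number: 80
```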

Current Impressions


It works… but my setup is flaky. I suspect resource starvation on the small worker VMs. I may try provisioning a new internal cluster with beefier VMs and see how that works.
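Before rebuilding, one thing worth trying (an assumption on my part, not something I have verified yet) is setting explicit resource requests and limits so the scheduler doesn't overcommit those small workers. A sketch of a container spec; the app name, image, and numbers are placeholders:

```yaml
# Fragment of a Deployment pod spec; names, image, and numbers are placeholders
containers:
  - name: my-api
    image: proget.example.local/feed/my-api:1.4.2
    resources:
      requests:          # what the scheduler reserves on a node
        cpu: 250m
        memory: 256Mi
      limits:            # hard ceiling before throttling / OOM kill
        cpu: 500m
        memory: 512Mi
```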

I do like deploying clusters with RKE; however, I can see how it would be difficult to manage when more than one person is involved, since the cluster.yml and generated state file live wherever you last ran rke up.


Once it's running, though, it's great: creating new APIs or apps and getting them running in a scalable fashion is easy. Helm charts make deployment and updates a snap.
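Deploying or updating an app boils down to a helm upgrade --install against a small values file. A hypothetical values.yaml for one of my app charts; every name and tag here is a placeholder:

```yaml
# values.yaml — placeholder values for a hypothetical app chart
image:
  repository: proget.example.local/feed/my-api   # placeholder ProGet feed path
  tag: "1.4.2"
replicaCount: 2
ingress:
  enabled: true
  hosts:
    - host: api.example.com                      # placeholder hostname
```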

That said, I would not trust myself to run this in production without a LOT more training.