Homelab
For a long while, I have run a few services on my homelab. At least, that’s the way I phrase it when I am overselling the sophistication of my setup.
It started years ago with a single Raspberry Pi that was not my “main” computer. I found it liberating to have a second device that I could break without having to immediately fix it so I could continue to be productive. Over time, the shape of my homelab has changed periodically, but I like to think it’s still pretty scrappy.
The Hardware
- A desktop computer I built from parts around 2009: a Core i7 processor, 24 GB of RAM (upgraded somewhere along the way), and a GeForce GTX 1050 graphics card.
- A Raspberry Pi 4 with 4 GB of RAM.
- A Raspberry Pi 3.
- A “Gigabit Switch” that I got online for about $25.
Concepts of a Plan
Instead of continuing to run services on these devices in the “traditional” way, I decided I wanted to move everything to Kubernetes.
If your question is, “Why?” you should probably stop reading now. I have been using Kubernetes in my day job for years, so I felt confident I could do it, and that I would probably learn something in the process. Even as the person who [spoiler alert] did it, I would be pretty skeptical of anyone who told me it was a good idea.
Let’s just say I did it for fun.
Flavors of Kubernetes
I looked at several different ways to get a Kubernetes cluster up and running. First, I considered “The Hard Way.” I still hope to go through that process one day, but as a father training for a marathon, I find my free time comes in short spurts, and I would rather spend it deploying services than managing infrastructure directly. So, I moved on to the big three ways to deploy little clusters.
Kind
Although their branding no longer reflects it, the name used to be stylized KinD (Kubernetes in Docker). I use Kind all the time because it’s a quick, easy way to create a cluster on a machine that isn’t just a Kubernetes host. It’s an official project under a Kubernetes SIG, and well supported. Kind is especially useful for developing and testing Kubernetes itself, although it has much broader uses.
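For a sense of how light that is, a throwaway multi-node cluster is just a small config file. A sketch (the node counts are arbitrary):

```yaml
# kind-config.yaml: a minimal multi-node Kind cluster
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
```

Running `kind create cluster --config kind-config.yaml` brings the whole thing up as Docker containers, and `kind delete cluster` tears it down just as quickly.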
Minikube
Another official Kubernetes SIG project. This one is designed for scenarios where you really just want a single cluster to use as a playground. Great for people learning Kubernetes for the first time.
K3s
“The certified Kubernetes distribution built for IoT & Edge computing.”
K3s has public releases dating back to 2019, and the project focuses on running Kubernetes in constrained environments. Minimum system requirements are 2 cores and 2 GB of RAM for a server node, and 1 core and 512 MB of RAM for an “agent.” Their docs even cover installation on a Raspberry Pi.
To be fair, Kind and Minikube have similar system requirements, but without the documentation for Raspberry Pi. Since I planned to experiment on the Pi before disrupting services on the desktop, I found that persuasive.
5 Services
There were five core services I wanted to spin up quickly.
Traefik
Traefik is bundled with K3s by default, and it serves as the ingress controller for the cluster. Compared to my previous setup, it replaces an Nginx reverse proxy.
Praise Point: It basically just works, out of the K3s box.
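For illustration, exposing a service takes just one small resource. A minimal sketch, with a hypothetical hostname and backend:

```yaml
apiVersion: traefik.containo.us/v1alpha1  # the Traefik v2 CRD group bundled with K3s
kind: IngressRoute
metadata:
  name: immich
  namespace: immich
spec:
  entryPoints:
    - websecure                           # Traefik’s standard HTTPS entrypoint
  routes:
    - match: Host(`photos.example.com`)   # hypothetical hostname
      kind: Rule
      services:
        - name: immich-server             # hypothetical backend Service
          port: 2283
  tls:
    secretName: photos-example-com-tls    # certificate issued by cert-manager
```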
Pain Point: CRDs are duplicated between the traefik.containo.us group (used through Traefik v2) and the traefik.io group (used with Traefik v3). The newer group does not come first alphabetically, so in k9s, every time I tried to view an IngressRoute, it took me to the old CRD, which was empty. This is insanely annoying, and I think it is fixed in Traefik v3, which is not yet bundled with K3s.
One more gotcha came with trying to get Traefik to listen for both TCP and UDP DNS traffic on the same port. This was actually a JSON merge-patch issue in Kubernetes: service ports are keyed by port number alone, so two entries that differ only by protocol collide when patching. Long story short, I had to delete the Traefik service and recreate it, rather than patching the existing resource.
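On K3s, the packaged Traefik gets customized through a HelmChartConfig rather than by editing its resources directly. A sketch of what dual-protocol DNS entrypoints might look like (the port names and numbers are my own, and the exact values schema depends on the bundled chart version):

```yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    ports:
      dns-tcp:
        port: 8053        # container port (hypothetical)
        expose: true
        exposedPort: 53   # port exposed on the Service
        protocol: TCP
      dns-udp:
        port: 8053        # same port, different protocol
        expose: true
        exposedPort: 53
        protocol: UDP
```

Note that the resulting Service needs two port entries that differ only by protocol, which is exactly the shape the merge patch chokes on.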
Cert-Manager
I use real domain names I own to point to some of the services I host in my homelab. It’s possible to serve plain HTTP, but I prefer to use TLS (HTTPS), even at home. Doing so requires certificates. Compared to my previous setup, Cert-Manager replaces Certbot.
Setup went pretty smoothly, except that it took me a moment to realize that each namespace needs its own Issuer, each backed by a secret holding the API token, so there is some significant repetition.
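As a sketch, each namespace ends up with a pair like this (the email, DNS provider, and secret names are placeholders; I’m showing Cloudflare’s DNS-01 solver purely as an illustration):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: dns-api-token          # hypothetical; holds the DNS provider API token
  namespace: immich
type: Opaque
stringData:
  api-token: "<token>"
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt            # repeated in every namespace that needs certificates
  namespace: immich
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com   # hypothetical
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
      - dns01:
          cloudflare:
            apiTokenSecretRef:
              name: dns-api-token
              key: api-token
```

For what it’s worth, a ClusterIssuer would cut down on the repetition, but then the credentials have to live in cert-manager’s own namespace.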
CoreDNS
CoreDNS is the default DNS service that runs in a Kubernetes cluster. I decided to abuse the Kubernetes instance to also resolve names for my local network. Compared to my previous setup, CoreDNS replaces dnsmasq.
This is a bad idea for many reasons, including that it gives you the opportunity to break your cluster and your home network at the same time. However, it felt like a fun challenge and I didn’t want to run two DNS servers, so I made it work.
CoreDNS on K3s can be configured with a ConfigMap called coredns-custom. Once I figured out that only the file plugin in CoreDNS can be used to serve authoritative records, the rest started to click.
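Putting those together, the coredns-custom ConfigMap can carry both an extra server block and the zone file it references. A sketch (the zone name and addresses are made up, and the mount path is my understanding of how K3s wires the ConfigMap in):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  # keys ending in .server are imported as additional Corefile server blocks
  home.server: |
    home.example.com:53 {
        file /etc/coredns/custom/home.example.com.zone
        log
    }
  # the zone file itself, served authoritatively by the file plugin
  home.example.com.zone: |
    $ORIGIN home.example.com.
    @       3600 IN SOA ns.home.example.com. admin.home.example.com. (1 7200 3600 1209600 3600)
    @       3600 IN NS  ns.home.example.com.
    nas     3600 IN A   192.168.1.20
    desktop 3600 IN A   192.168.1.30
```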
I also had to remove the reference to the node’s resolv.conf from the default configuration, because it was adding a search domain to every query and slowing external queries to 1-3 seconds. This seems to be a known issue. Performance vastly improved after that change.
Finally, setting up forwarding for the root domain to external servers (e.g. Cloudflare’s 1.1.1.1) prevented infinite loops between my router and the DNS server.
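After those changes, the relevant part of the main server block looked roughly like this (a simplified sketch; the real default Corefile in the coredns ConfigMap has more plugins and varies by K3s version):

```yaml
# fragment of the kube-system/coredns ConfigMap, simplified
Corefile: |
  .:53 {
      errors
      health
      kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
      }
      cache 30
      loop
      # forward . /etc/resolv.conf  <- removed: it tacked a search domain
      #                                onto every external query
      forward . 1.1.1.1 1.0.0.1     # everything else goes straight to Cloudflare
  }
```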
Prometheus+
As someone who maintains a Prometheus fork, building some observability into my homelab felt like a natural fit.
I decided to give kube-prometheus a try this time. After sorting through various options, I landed on the kube-prometheus-stack Helm chart to install it, which came bundled with Prometheus, the Prometheus Operator, Kube State Metrics, Prometheus Node Exporter, and Grafana.
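For flavor, one low-ceremony way to install a chart on K3s is its bundled Helm controller, which turns the whole install into a single manifest. A sketch with placeholder values (field support varies by K3s version):

```yaml
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: kube-prometheus-stack
  namespace: kube-system        # K3s watches this namespace for HelmChart resources
spec:
  repo: https://prometheus-community.github.io/helm-charts
  chart: kube-prometheus-stack
  targetNamespace: monitoring
  createNamespace: true
  valuesContent: |-
    grafana:
      adminPassword: change-me  # placeholder; use a Secret in practice
```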
Overall, installation went smoothly, and it immediately started collecting metrics from a variety of places and displaying pretty charts. It was a good out-of-the-box experience, but a little intimidating to build on, so I basically left all of it alone. That said, the CoreDNS dashboard was helpful in debugging some of the issues I mentioned earlier.
Immich
Finally, a real service that isn’t just enabling or enhancing other services. Immich is a “self-hosted photo and video management solution.”
I think of it as a self-hosted alternative (or addition) to something like Google Photos: one more place to back up baby pictures in case I lose access to them elsewhere. Immich also has a Helm chart, so I installed with that. It turns out there are some settings worth tweaking, like configuring the database password as a reference to a Secret instead of plaintext in the manifest. Realistically, if a bad actor is playing around in my homelab Kubernetes cluster, that will probably be the least of my worries, but it’s still nice to follow some basic security practices. They’re not practices if you don’t practice.
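A sketch of the shape that takes (the names are placeholders, and the values keys may not match the Immich chart’s actual schema; check the chart before copying):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: immich-db
  namespace: immich
type: Opaque
stringData:
  password: change-me   # created out-of-band rather than committed with the manifests
---
# hypothetical fragment of the chart’s values: point DB_PASSWORD at the
# Secret instead of inlining the password
env:
  DB_PASSWORD:
    valueFrom:
      secretKeyRef:
        name: immich-db
        key: password
```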
It all comes crashing down
Shortly after getting all of this set up, my trusty 15-year-old computer took a turn for the worse after a power surge and outage caused by four teenagers in a stolen SUV colliding with a power transformer nearby. Yes, really.
So, my next task will be setting it all up again. With any luck, the data from the hard drives will be accessible. Theoretically, it could be as easy as recovering the data, installing K3s, and reapplying all the manifests (which I saved). Will it actually be that easy? Stay tuned.