My home lab is a personal setup of computer and networking equipment, used for testing, experimenting, and learning about different technologies.
I have been running a Docker/Portainer-based server at home on an Intel Atom NUC since 2023 - simple to set up and easy to host applications that provide docker-compose configuration files. During the Christmas holidays of 2024, I upgraded from Docker to Kubernetes (K8S). This page serves as a log of how I set up the cluster.
The general goal is to have an environment for:
I bought three identical BeeLink MINI S12 mini PCs - they are cheap and powerful:
Product Page:
=> Beelink MINI S12 Intel® Alder Lake-N N95
The systems come pre-installed with Windows 11 Pro, so I needed to get rid of that and install Ubuntu. Common practice is to use Balena Etcher to build a USB boot stick from an Ubuntu LTS Server image (I used 24.04). I did not manage to boot from such a stick, and after hours of testing I used Rufus instead. Rufus has extra options for creating (more) compatible boot sticks, and enabling those finally got the BeeLink booting.
=> Rufus "Create bootable USB drives the easy way"
I am using a small 5-port TP-Link Gigabit switch to link the three systems. One of the switch's ports is connected to my home LAN.
Each system gets its network configuration via LAN DHCP, but the ethernet interfaces have an additional static alias IP on a private network. These private IP addresses are used by the Kubernetes nodes to connect to each other. The reason for this is to be able to "move" the cluster into a new/different LAN, should the need ever arise.
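For illustration, a minimal netplan sketch of that setup - the interface name, private subnet and address are assumptions, not my actual values:

```
# /etc/netplan/99-k8s-alias.yaml (hypothetical example)
network:
  version: 2
  ethernets:
    enp1s0:                 # assumed interface name
      dhcp4: true           # regular address from the home LAN
      addresses:
        - 10.10.10.11/24    # assumed static alias IP for node-to-node traffic
```

Apply with "sudo netplan apply"; the interface then carries both the DHCP address and the private alias.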
The next step was installing Kubernetes itself. I chose the Rancher "K3S" distribution - reasons being (from the K3S page):
K3s is a highly available, certified Kubernetes distribution designed for production workloads in unattended, resource-constrained, remote locations [..]
K3s is packaged as a single <70MB binary that reduces the dependencies and steps needed to install, run and auto-update a production Kubernetes cluster.
[..] K3s works great on something as small as a Raspberry Pi to an AWS a1.4xlarge 32GiB server.
Setup is simple; I just followed the instructions. Note that I used the private static (aliased) IPs to interconnect the nodes.
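Roughly, the install looks like this - a sketch only, with placeholder IPs and a simple server/agent split; the exact flags depend on your topology:

```
# on the first node (server), bind K3S to the private alias IP:
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --node-ip=10.10.10.11" sh -

# grab the join token from the server:
sudo cat /var/lib/rancher/k3s/server/node-token

# on the other nodes (agents), join via the private network:
curl -sfL https://get.k3s.io | K3S_URL=https://10.10.10.11:6443 K3S_TOKEN=<token> \
  INSTALL_K3S_EXEC="agent --node-ip=10.10.10.12" sh -
```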
A bare-bones K8S provides only local node mounts as a storage solution for persistent volumes; a node or underlying SSD failure results in data loss. There are multiple implementations for distributed persistent volumes. I evaluated Ceph, but it seemed quite demanding on CPU and RAM resources, so I finally settled on Longhorn.
=> Longhorn - Cloud native distributed block storage for Kubernetes
Install via Helm:
```
helm repo add longhorn https://charts.longhorn.io
helm repo update
helm install longhorn longhorn/longhorn --namespace longhorn-system --create-namespace --version 1.7.2
```
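Once installed, Longhorn registers a storage class named "longhorn", so a persistent volume claim can simply reference it (claim name and size below are hypothetical):

```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-data          # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn  # provided by the Longhorn install
  resources:
    requests:
      storage: 5Gi
```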
I let Longhorn do the off-site backups to Backblaze, too.
=> "Configuring S3 backup for Longhorn" by João Rocha / 2024-05-30
Sample config:
```
apiVersion: v1
kind: Secret
metadata:
  name: backup-secret
  namespace: longhorn-system
type: Opaque
data:
  AWS_ENDPOINTS: ***REDACTED***
  AWS_ACCESS_KEY_ID: ***REDACTED***
  AWS_SECRET_ACCESS_KEY: ***REDACTED***
```
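The secret is then referenced from Longhorn's backup target settings; via the Helm chart values that can look roughly like this (bucket name and region are placeholders):

```
# values.yaml snippet for the longhorn chart (sketch)
defaultSettings:
  backupTarget: "s3://my-backup-bucket@us-west-004/"   # placeholder bucket/region
  backupTargetCredentialSecret: "backup-secret"
```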
It is of course possible to manage a K8S cluster via CLI only, but sometimes you just want a nifty GUI for looking up information on pods and services. I use Portainer: lightweight and easy to install.
```
helm upgrade --install --create-namespace -n portainer portainer portainer/portainer --set image.tag=2.21.5
```
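If the chart repository is not configured yet, it needs to be added first; as far as I know the Portainer chart is published at the URL below, but check their docs:

```
helm repo add portainer https://portainer.github.io/k8s/
helm repo update
```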
After researching several options (such as MetalLB - awesome, but rather complicated), I settled on using Tailscale as a simple way to expose / access services in the cluster.
=> Official docs: Tailscale on Kubernetes
=> "Free Kubernetes Load Balancers" by Lee Briggs, published Feb 26, 2024
Basically: create an OAuth client in the Tailscale web UI and then:
```
helm repo add tailscale https://pkgs.tailscale.com/helmcharts  # add the helm chart repo
helm repo update                                               # update the repo
helm upgrade \
  --install \
  tailscale-operator \
  tailscale/tailscale-operator \
  --namespace=tailscale \
  --create-namespace \
  --set-string oauth.clientId=***REDACTED*** \
  --set-string oauth.clientSecret=***REDACTED***
```
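With the operator running, exposing a service to the tailnet is mostly a matter of letting Tailscale act as the load balancer class. A sketch - service name, selector and ports are hypothetical:

```
apiVersion: v1
kind: Service
metadata:
  name: gitea-http-ts            # hypothetical name
spec:
  type: LoadBalancer
  loadBalancerClass: tailscale   # handled by the Tailscale operator
  selector:
    app: gitea                   # hypothetical selector
  ports:
    - port: 3000
      targetPort: 3000
```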
With Tailscale, the K8S services are exposed to the tailnet, but not (yet) reachable from the internet. For this purpose, I rent a small Hetzner cloud instance running Ubuntu LTS & Nginx, terminating TLS (via Let's Encrypt / Certbot) and reverse proxying some services to the Tailscale network (sample config):
```
server {
    server_name git.farcaster.net;
    client_max_body_size 128M; # important for docker registry API, large layers!

    location / {
        proxy_pass http://gitea-gitea-http:3000; # Tailscale K8S Operator Machine endpoint

        # Setting the Host header to preserve the original request
        proxy_set_header Host $host;

        # Headers to pass the original IP address and request details
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/git.farcaster.net/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/git.farcaster.net/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
```
For hosting my own Git repositories (and a private container registry) I run Gitea. From the Gitea website:
The goal of this project is to make the easiest, fastest, and most painless way of setting up a self-hosted Git service.
Why do I need this? First: I want to use self-built container images in my cluster (and not fetch them from the internet). Second: using ArgoCD to manage K8S apps requires a Git repository.
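Gitea ships with a built-in container registry, so pushing a locally built image looks roughly like this (owner and image name are placeholders):

```
# log in to the Gitea container registry
docker login git.farcaster.net

# tag and push a locally built image (owner/image are placeholders)
docker build -t git.farcaster.net/myuser/myapp:latest .
docker push git.farcaster.net/myuser/myapp:latest
```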
Configure K3S to (globally) use the Gitea private docker registry (on all K3S nodes):
```
# /etc/rancher/k3s/registries.yaml
mirrors:
  "git.farcaster.net":
    endpoint:
      - "https://git.farcaster.net"
configs:
  "git.farcaster.net":
    auth:
      username: "***REDACTED***"
      password: "***REDACTED***"
```
From the ArgoCD Website:
Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes. Application definitions, configurations, and environments should be declarative and version controlled. Application deployment and lifecycle management should be automated, auditable, and easy to understand.
=> ArgoCD Application Dashboard UI
So with ArgoCD, you basically connect your Git repository, and on push events ArgoCD transforms the state of your application in the cluster according to the specified K8S manifests or Helm charts.
=> ArgoCD Docs
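A minimal Application manifest sketch - the app name, repo URL, path and target namespace are placeholders for whatever lives in your Git:

```
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: open-webui             # hypothetical app name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.farcaster.net/myuser/k8s-apps.git  # placeholder repo
    targetRevision: HEAD
    path: open-webui           # placeholder path with the manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: open-webui
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```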
One of the challenges with GitOps is how to handle secrets - plain-text passwords and such should of course never be committed to Git repositories! One simple solution is Bitnami's Sealed Secrets, which uses public key encryption: you encrypt the data locally and add only the encrypted secrets to Git. A controller in the cluster then takes care of decrypting the SealedSecret custom resource manifest into a "real" Secret. Details on the linked site:
Installation:
```
kubectl apply -f https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.27.3/controller.yaml
brew install kubeseal
```
Usage:
```
# create a secret (note use of `--dry-run` - this is just a local file!)
echo -n secretvalue | kubectl create secret generic mysecret --dry-run=client --from-file=mysecretkey=/dev/stdin -o json >mysecret.json

# This is the important bit:
kubeseal -f mysecret.json -w mysealedsecret.json

# At this point mysealedsecret.json is safe to upload to Github,
# post on Twitter, etc.

# add another key to the secret:
echo -n baz | kubectl create secret generic mysecret --dry-run=client --from-file=bar=/dev/stdin -o json \
  | kubeseal --merge-into mysealedsecret.json

# Eventually:
kubectl create -f mysealedsecret.json

# Profit!
kubectl get secret mysecret
```
Keel is a "Kubernetes Operator to automate Helm, DaemonSet, StatefulSet & Deployment updates".
My use-case is to auto-update services within the cluster that basically just use "IMAGE:latest" as image/tag. Keel polls the SHA checksums of the images and restarts the pods whenever necessary. For this to work, some annotations need to be specified:
```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: open-webui-deployment
  namespace: open-webui
  annotations:
    keel.sh/policy: force
    keel.sh/trigger: poll
    keel.sh/pollSchedule: "@midnight"
```
Make sure to have "imagePullPolicy: Always" on your containers!
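In the pod template of the same Deployment that means something like this (the image reference is an assumed placeholder):

```
spec:
  template:
    spec:
      containers:
        - name: open-webui
          image: ghcr.io/open-webui/open-webui:latest  # assumed image reference with a :latest style tag
          imagePullPolicy: Always                      # ensures the tag is actually re-pulled on restart
```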
Keel doesn't need a database. Keel doesn't need persistent disk. It gets all required information from your cluster. This makes it truly cloud-native and easy to deploy.
In order to install Keel, just follow their instructions.
=> Keel - Kubernetes Operator for auto container updates
How do I get traffic for arbitrary (non-HTTP) apps into the cluster? I could have used iptables with port forwarding, but I chose to use the nginx stream module instead.
Sample: forward Gemini traffic from my internet-facing Hetzner system to my K8S Gemini load balancer:
```
# make sure nginx stream module is enabled and installed:
apt install nginx-full

# add this to: 99-stream-gemini.conf in /etc/nginx/modules-enabled
stream {
    upstream backend {
        server websites-gmid-loadbalancer:1965;
    }
    server {
        listen 1965;
        proxy_pass backend;
    }
}
```
What do I run on my cluster? Some of the more permanent things include:
=> Discuss this topic on Mastodon