f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation

Published at 2024-12-02T23:48:21+02:00

This is the second blog post about my f3s series for my self-hosting demands in my home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution I will use on FreeBSD-based physical machines.

We set the stage last time; this time, we will set up the hardware for this project.

These are all the posts so far:

=> 2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage
=> 2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation (You are currently reading this)

=> f3s logo

Logo was generated by ChatGPT.

Let's continue...

Table of Contents

* Deciding on the hardware
* Beelink unboxing
* Network switch
* Installing FreeBSD
* Hardware check

Deciding on the hardware

Note that the OpenBSD VMs included in the f3s setup are already in place; as described in the first part of this series, they will later serve as internet ingress. These are virtual machines that I rent at OpenBSD Amsterdam and Hetzner.

=> https://openbsd.amsterdam
=> https://hetzner.cloud

This leaves the FreeBSD boxes to be covered; they will later run k3s in Linux VMs on the bhyve hypervisor.

I've been considering whether to use Raspberry Pis or look for alternatives. It turns out that complete N100-based mini-computers aren't much more expensive than Raspberry Pi 5s, and they don't require assembly. Furthermore, I like that they are AMD64 and not ARM-based, which increases compatibility with some applications (e.g., I might want to virtualize Windows (via bhyve) on one of those, though that's out of scope for this blog series).

Not ARM but Intel N100

I needed something compact, efficient, and capable enough to handle the demands of a small-scale Kubernetes cluster - preferably something I wouldn't have to assemble much. After some research, I decided on the Beelink S12 Pro with its Intel N100 CPU.

=> Beelink Mini S12 Pro N100 official page

The Intel N100 CPUs are built on the "Alder Lake-N" architecture and are designed to strike a good balance between performance and energy efficiency. With four cores, they're more than capable of running multiple containers under moderate workloads. Plus, they consume only around 8W of power (ok, that's more than the Pis...), keeping the electricity bill low and the setup quiet - perfect for 24/7 operation.
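(Back-of-the-envelope: three boxes at around 8W each make roughly 24W combined, i.e. 0.024 kW x 730 h ≈ 17.5 kWh per month - barely noticeable on the bill.)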

=> Beelink preparation

The Beelink comes with the following specs:

* Intel N100 CPU (4 cores, Alder Lake-N)
* 16 GB of DDR4 RAM
* An M.2 SSD
* 1 GbE Realtek Ethernet

I bought three (3) of them for the cluster I intend to build.

Beelink unboxing

Unboxing was uneventful; every Beelink PC came with the usual accessories. Overall, I love the small form factor.

Network switch

I went with a TP-Link 5-port mini switch, as I had a spare one available. That switch is plugged into my wall Ethernet port, which connects directly to my fiber internet router with 100 Mbit/s download and 50 Mbit/s upload speed.

=> Switch

Installing FreeBSD

Base install

First, I downloaded the boot-only ISO of the latest FreeBSD release and dumped it on a USB stick via my Fedora laptop:

[paul@earth]~/Downloads% sudo dd \
  if=FreeBSD-14.1-RELEASE-amd64-bootonly.iso \
  of=/dev/sda conv=sync
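It doesn't hurt to verify the image before writing it. FreeBSD publishes checksum files next to the ISO; the exact file name below is an assumption based on the release layout:

[paul@earth]~/Downloads% sha256sum -c \
  CHECKSUM.SHA256-FreeBSD-14.1-RELEASE-amd64 --ignore-missing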

Next, I plugged the Beelinks (one after another) into my monitor via HDMI (the resolution of the FreeBSD text console seems strangely stretched, as I am using the LG Dual Up monitor), connected Ethernet, an external USB keyboard, and the FreeBSD USB stick, and booted the devices up. With F7, I entered the boot menu and selected the USB stick for the FreeBSD installation.

The installation was uneventful. I selected:

* Guided Root-on-ZFS partitioning, which creates the default zroot pool
* A regular user (paul) and the root password
* sshd to be started at boot

After doing all that three times (once for each Beelink PC), I had three ready-to-use FreeBSD boxes! Their hostnames are f0, f1 and f2!

=> Beelink installation

Latest patch level and customizing /etc/hosts

After the first boot, I upgraded to the latest FreeBSD patch level as follows:

root@f0:~ # freebsd-update fetch
root@f0:~ # freebsd-update install
root@f0:~ # shutdown -r now
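To double-check the kernel and userland patch levels after the reboot, freebsd-version from the base system does the trick:

root@f0:~ # freebsd-version -ku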

I also added the following entries for the three FreeBSD boxes to the /etc/hosts file:

root@f0:~ # cat <<END >>/etc/hosts
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
END

You might wonder: why bother with the hosts file at all? Why not use DNS properly? The reason is simplicity. I don't manage 100 hosts, only a few here and there. As I run an OpenWRT router at home, I could also configure everything there, but maybe I'll do that later. For now, I keep it simple and straightforward.
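With these entries in place, the boxes can reach each other by name (assuming all three are up):

root@f0:~ # ping -c 1 f1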

After install

After that, I installed the following additional packages:

root@f0:~ # pkg install helix doas zfs-periodic uptimed

Helix editor

Helix? It's my favourite text editor. I have nothing against vi but like hx (Helix) more!

=> https://helix-editor.com/

doas

doas is a minimal sudo replacement ported from OpenBSD. The shipped sample configuration was good enough as a starting point:

root@f0:~ # cp /usr/local/etc/doas.conf.sample /usr/local/etc/doas.conf
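The only rule I really need is one that lets members of the wheel group become root. A minimal sketch (my illustration, not the shipped sample; add nopass or persist to taste):

permit :wheel as root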

=> https://man.openbsd.org/doas

Periodic ZFS snapshotting

The zfs-periodic package hooks into FreeBSD's periodic(8) framework to take automatic snapshots. I enabled daily, weekly and monthly snapshots of the zroot pool:

root@f0:~ # cat <<END >>/etc/periodic.conf
daily_zfs_snapshot_enable="YES"
daily_zfs_snapshot_pools="zroot"
daily_zfs_snapshot_keep="7"
weekly_zfs_snapshot_enable="YES"
weekly_zfs_snapshot_pools="zroot"
weekly_zfs_snapshot_keep="5"
monthly_zfs_snapshot_enable="YES"
monthly_zfs_snapshot_pools="zroot"
monthly_zfs_snapshot_keep="6"
END

=> https://github.com/ross/zfs-periodic
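Once periodic(8) has fired overnight, the snapshots can be inspected like this (just a quick check; dataset names depend on the pool layout):

root@f0:~ # zfs list -t snapshot -r zroot | head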

Uptime tracking

root@f0:~ # cp /usr/local/etc/uptimed.conf-dist \
  /usr/local/etc/uptimed.conf
root@f0:~ # hx /usr/local/etc/uptimed.conf

In the Helix editor session, I changed `LOG_MAXIMUM_ENTRIES` to `0` to keep all uptime entries forever instead of cutting them off at 50 (the default). After that, I enabled and started `uptimed`:

root@f0:~ # service uptimed enable
root@f0:~ # service uptimed start
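As an aside, that config change could also be scripted. A sketch with BSD sed (note the empty -i argument), assuming the key=value format of the config file:

root@f0:~ # sed -i '' 's/^LOG_MAXIMUM_ENTRIES=.*/LOG_MAXIMUM_ENTRIES=0/' \
  /usr/local/etc/uptimed.conf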

To check the current uptime stats, I can now run `uprecords`:

root@f0:~ # uprecords
     #               Uptime | System                                     Boot up
----------------------------+---------------------------------------------------
->   1     0 days, 00:07:34 | FreeBSD 14.1-RELEASE      Mon Dec  2 12:21:44 2024
----------------------------+---------------------------------------------------
NewRec     0 days, 00:07:33 | since                     Mon Dec  2 12:21:44 2024
    up     0 days, 00:07:34 | since                     Mon Dec  2 12:21:44 2024
  down     0 days, 00:00:00 | since                     Mon Dec  2 12:21:44 2024
   %up              100.000 | since                     Mon Dec  2 12:21:44 2024

This is how I track the uptimes of all of my hosts:

=> ./2023-05-01-unveiling-guprecords:-uptime-records-with-raku.gmi Unveiling `guprecords.raku`: Global Uptime Records with Raku
=> https://github.com/rpodgorny/uptimed

Hardware check

Ethernet

Works. Nothing eventful, really. It's a cheap Realtek chip, but it will do what it is supposed to do.

paul@f0:~ % ifconfig re0
re0: flags=1008843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST,LOWER_UP> metric 0 mtu 1500
    options=8209b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,WOL_MAGIC,LINKSTATE>
    ether e8:ff:1e:d7:1c:ac
    inet 192.168.1.130 netmask 0xffffff00 broadcast 192.168.1.255
    inet6 fe80::eaff:1eff:fed7:1cac%re0 prefixlen 64 scopeid 0x1
    inet6 fd22:c702:acb7:0:eaff:1eff:fed7:1cac prefixlen 64 detached autoconf
    inet6 2a01:5a8:304:1d5c:eaff:1eff:fed7:1cac prefixlen 64 autoconf pltime 10800 vltime 14400
    media: Ethernet autoselect (1000baseT <full-duplex>)
    status: active
    nd6 options=23<PERFORMNUD,ACCEPT_RTADV,AUTO_LINKLOCAL>

RAM

All there - 16902905856 bytes is roughly 15.7 GiB; the remainder of the 16 GiB is reserved by the firmware and the integrated graphics:

paul@f0:~ % sysctl hw.physmem
hw.physmem: 16902905856

CPUs

They work:

paul@f0:~ % sysctl dev.cpu | grep freq:
dev.cpu.3.freq: 705
dev.cpu.2.freq: 705
dev.cpu.1.freq: 604
dev.cpu.0.freq: 604

CPU throttling

With `powerd` running, the CPU frequency is throttled down when the box isn't busy. To stress it a bit, I run `ubench` and watch the frequencies being unthrottled again:

paul@f0:~ % doas pkg install ubench
paul@f0:~ % rehash # For tcsh to find the newly installed command
paul@f0:~ % ubench &
paul@f0:~ % sysctl dev.cpu | grep freq:
dev.cpu.3.freq: 2922
dev.cpu.2.freq: 2922
dev.cpu.1.freq: 2923
dev.cpu.0.freq: 2922
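For the record, `powerd` isn't enabled out of the box; I had enabled it the standard way beforehand (stock setup - the exact invocation is an assumption, as it isn't in my notes):

root@f0:~ # service powerd enable
root@f0:~ # service powerd start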
