<updated>2025-01-19T13:21:25+02:00</updated>

<title>foo.zone feed</title>

<subtitle>To be in the .zone!</subtitle>

<link href="gemini://foo.zone/gemfeed/atom.xml" rel="self" />

<link href="gemini://foo.zone/" />

<id>gemini://foo.zone/</id>

<entry>

    <title>Working with an SRE Interview</title>

    <link href="gemini://foo.zone/gemfeed/2025-01-15-working-with-an-sre-interview.gmi" />

    <id>gemini://foo.zone/gemfeed/2025-01-15-working-with-an-sre-interview.gmi</id>

    <updated>2025-01-15T00:16:04+02:00</updated>

    <author>

        <name>Paul Buetow aka snonux</name>

        <email>paul@dev.buetow.org</email>

    </author>

    <summary>I have been interviewed by Florian Buetow on cracking-ai-engineering.com about what it's like working with a Site Reliability Engineer from the point of view of a Software Engineer, Data Scientist, and AI Engineer.</summary>

    <content type="xhtml">

        <div xmlns="http://www.w3.org/1999/xhtml">

            <h1 style='display: inline' id='working-with-an-sre-interview'>Working with an SRE Interview</h1><br />

Published at 2025-01-15T00:16:04+02:00

I have been interviewed by Florian Buetow on cracking-ai-engineering.com about what it's like working with a Site Reliability Engineer from the point of view of a Software Engineer, Data Scientist, and AI Engineer.

See original interview here

Cracking AI Engineering

Below, I am posting the interview here on my blog as well.

In this insightful interview, Paul Bütow, a Principal Site Reliability Engineer at Mimecast, shares over a decade of experience in the field. Paul highlights the role of an Embedded SRE, emphasizing the importance of automation, observability, and effective incident management. We also focused on the key question of how you can work effectively with an SRE, whether you are an individual contributor or a manager, a software engineer or a data scientist, and how you can learn more about site reliability engineering.

Hi Paul, please introduce yourself briefly to the audience. Who are you, what do you do for a living, and where do you work?

My name is Paul Bütow, I work at Mimecast, and I’m a Principal Site Reliability Engineer there. I’ve been with Mimecast for almost ten years now. The company specializes in email security, including things like archiving, phishing detection, malware protection, and spam filtering.

You mentioned that you’re an ‘Embedded SRE.’ What does that mean exactly?

It means that I’m directly part of the software engineering team, not in a separate Ops department. I ensure that nothing is deployed manually, and everything runs through automation. I also set up monitoring and observability. These are two distinct aspects: monitoring alerts us when something breaks, while observability helps us identify trends. I also create runbooks so we know what to do when specific incidents occur frequently.

Infrastructure SREs on the other hand handle the foundational setup, like providing the Kubernetes cluster itself or ensuring the operating systems are installed. They don't work on the application directly but ensure the base infrastructure is there for others to use. This works well when a company has multiple teams that need shared infrastructure.

How did your interest in Linux or FreeBSD start?

It began during my school days. We had a PC with DOS at home, and I eventually bought Suse Linux 5.3. Shortly after, I discovered FreeBSD because I liked its handbook so much. I wanted to understand exactly how everything worked, so I also tried Linux from Scratch. That involves installing every package manually to gain a better understanding of operating systems.

https://www.FreeBSD.org

https://linuxfromscratch.org/

And after school, you pursued computer science, correct?

Exactly. I wasn’t sure at first whether I wanted to be a software developer or a system administrator. I applied for both and eventually accepted an offer as a Linux system administrator. This was before 'SRE' became a buzzword, but much of what I did back then (automation, infrastructure as code, monitoring) is now considered part of the typical SRE role.

Tell us about how you joined Mimecast. When did you fully embrace the SRE role?

I started as a Linux sysadmin at 1&1. I managed an ad server farm with hundreds of systems and later handled load balancers. Together with an architect, we managed F5 load balancers distributing around 2,000 services, including for portals like web.de and GMX. I also led the operations team technically for a while before moving to London to join Mimecast.

At Mimecast, the job title was explicitly 'Site Reliability Engineer.' The biggest difference was that I was no longer in a separate Ops department but embedded directly within the storage and search backend team. I loved that because we could plan features together, from automation to measurability and observability. Mimecast also operates thousands of physical servers for email archiving, which was fascinating since I already had experience with large distributed systems at 1&1. It was the right step for me because it allowed me to work close to the code while remaining hands-on with infrastructure.

What are the differences between SRE, DevOps, SysAdmin, and Architects?

SREs are like the next step after SysAdmins. A SysAdmin might manually install servers, replace disks, or use simple scripts for automation, while SREs use infrastructure as code and focus on reliability through SLIs, SLOs, and automation. DevOps isn’t really a job; it’s more of a way of working, where developers are involved in operations tasks like setting up CI/CD pipelines or on-call shifts. Architects focus on designing systems and infrastructures, such as load balancers or distributed systems, working alongside SREs to ensure the systems meet the reliability and scalability requirements. The specific responsibilities of each role depend on the company, and there is often overlap.

What are the most important reliability lessons you’ve learned so far?

Runbooks sound very practical. Can you explain how they’re used day-to-day?

Runbooks are essentially guides for handling specific incidents. For instance, if a service won’t start, the runbook will specify where the logs are and which commands to use. Observability takes it a step further, helping us spot changes early, like rising error rates or latency, so we can address issues before they escalate.

When should you decide to put something into a runbook, and when is it unnecessary?

If an issue happens frequently, it should be documented in a runbook so that anyone, even someone new, can follow the steps to fix it. The idea is that 90% of the common incidents should be covered. For example, if a service is down, the runbook would specify where to find logs, which commands to check, and what actions to take. On the other hand, rare or complex issues, where the resolution depends heavily on context or varies each time, don’t make sense to include in detail. For those, it’s better to focus on general troubleshooting steps.
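To make this concrete, here is a generic sketch of what such runbook steps often look like in shell form. The service name and log path are hypothetical, not taken from any actual runbook mentioned in the interview:

```shell
# Hypothetical runbook sketch for a "service won't start" alert.
# The service name and log path are illustrative only.
svc=myservice

# Step 1: is the process running at all?
pgrep -x "$svc" >/dev/null || echo "$svc is not running"

# Step 2: inspect the most recent log lines for startup errors
tail -n 50 "/var/log/${svc}.log" 2>/dev/null || echo "no log file for $svc"
```

In a real runbook, each step would also state what to do with the findings: the restart command, the escalation path, and so on.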

How do you search for and find the correct runbooks?

Runbooks should be linked directly in the alert you receive. For example, if you get an alert about a service not running, the alert will have a link to the runbook that tells you what to check, like logs or commands to run. Runbooks are best stored in an internal wiki, so if you don’t find the link in the alert, you know where to search. The important thing is that runbooks are easy to find and up to date because that’s what makes them useful during incidents.

Do you have an interesting war story you can share with us?

Sure. At 1&1, we had a proprietary ad server software that ran a SQL query during startup. The query got slower over time, eventually timing out and preventing the server from starting. Since we couldn’t access the source code, we searched the binary for the SQL and patched it. By pinpointing the issue, a developer was able to adjust the SQL. This collaboration between sysadmin and developer perspectives highlights the value of SRE work.

You’re embedded in a team-how does collaboration with developers work practically?

We plan everything together from the start. If there’s a new feature, we discuss infrastructure, automated deployments, and monitoring right away. Developers are experts in the code, and I bring the infrastructure expertise. This avoids unpleasant surprises before going live.

How about working with data scientists or ML engineers? Are there differences?

The principles are the same. ML models also need to be deployed and monitored. You deal with monitoring, resource allocation, and identifying performance drops. Whether it’s a microservice or an ML job, at the end of the day, it’s all running on servers or clusters that must remain stable.

What about working with managers or the FinOps team?

We often discuss costs, especially in the cloud, where scaling up resources is easy. It’s crucial to know our metrics: do we have enough capacity? Do we need all instances? Or is the CPU only at 5% utilization? This data helps managers decide whether the budget is sufficient or if optimizations are needed.
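As a toy illustration of such a capacity check (the instance names and utilization numbers are entirely made up), averaging sampled CPU utilization is a one-liner:

```shell
# Made-up per-instance CPU utilization samples (percent)
printf '%s\n' 'i-01 5' 'i-02 7' 'i-03 80' |
    awk '{ sum += $2; n++ } END { printf "avg CPU: %.1f%%\n", sum/n }'
```

Numbers like these are what turn a budget discussion from gut feeling into data.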

Do you have practical tips for working with SREs?

Yes, I have a few:

Let’s talk about AI. How do you use it in your daily work?

For boilerplate code, like Terraform snippets, I often use ChatGPT. It saves time, although I always review and adjust the output. Log analysis is another exciting application. Instead of manually going through millions of lines, AI can summarize key outliers or errors.
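The log-summarizing idea can also be sketched with plain shell tools before any AI is involved; the sample log lines below are invented for illustration:

```shell
# Tiny sample log as a stand-in for millions of real lines
log=$(mktemp)
printf '%s\n' \
    'ERROR db timeout' 'INFO ok' 'ERROR db timeout' \
    'ERROR disk full' 'INFO ok' > "$log"

# Count distinct error lines, most frequent first
grep '^ERROR' "$log" | sort | uniq -c | sort -rn

rm -f "$log"
```

For the sample above, this prints the two distinct errors with their counts, most frequent first. AI summarization becomes interesting when the outliers are not this uniform.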

Do you think AI could largely replace SREs or significantly change the role?

I see AI as an additional tool. SRE requires a deep understanding of how distributed systems work internally. While AI can assist with routine tasks or quickly detect anomalies, human expertise is indispensable for complex issues.

What resources would you recommend for learning about SRE?

The Google SRE book is a classic, though a bit dry. I really like 'Seeking SRE,' as it offers various perspectives on SRE, with many practical stories from different companies.

https://sre.google/books/

Seeking SRE

Do you have a podcast recommendation?

The Google SRE Prodcast is quite interesting. It offers insights into how Google approaches SRE, along with perspectives from external guests.

https://sre.google/prodcast/

You also have a blog. What motivates you to write regularly?

Writing helps me learn the most. It also serves as a personal reference. Sometimes I look up how I solved a problem a year ago. And of course, others tackling similar projects might find inspiration in my posts.

What do you blog about?

Mostly technical topics I find exciting, like homelab projects, Kubernetes, or book summaries on IT and productivity. It’s a personal blog, so I write about what I enjoy.

To wrap up, what are three things every team should keep in mind for stability?

First, maintain runbooks and documentation to avoid chaos at night. Second, automate everything: manual installs in production are risky. Third, define SLIs, SLOs, and SLAs early so everyone knows what we’re monitoring and guaranteeing.
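As a tiny, illustrative sketch of an SLI (not an actual metrics pipeline): computing an availability ratio from a hypothetical access log where the second field is the HTTP status code:

```shell
# Hypothetical access log: "<time> <status>"; field 2 is the HTTP status
log=$(mktemp)
printf '%s\n' '10:00 200' '10:01 200' '10:02 500' '10:03 200' > "$log"

# Availability SLI: fraction of non-5xx responses (3 of 4 here)
awk '{ total++ } $2 < 500 { good++ } END { printf "%.2f\n", good/total }' "$log"

rm -f "$log"
```

An SLO would then be a target on this number (e.g., 0.999 over 30 days), and an SLA the externally guaranteed version of it.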

Is there a motto or mindset that particularly inspires you as an SRE?

"Keep it simple and stupid"-KISS. Not everything has to be overly complex. And always stay curious. I’m still fascinated by how systems work under the hood.

Where can people find you online?

You can find links to my socials on my website paul.buetow.org

I regularly post articles and link to everything else I’m working on outside of work.

https://paul.buetow.org

Thank you very much for your time and this insightful interview into the world of site reliability engineering.

My pleasure, this was fun.

Dear reader, I hope this conversation with Paul Bütow provided an exciting peek into the world of Site Reliability Engineering. Whether you’re a software developer, data scientist, ML engineer, or manager, reliable systems are always a team effort. Hopefully, you’ve taken some insights or tips from Paul’s experiences for your own team or next project. Thanks for joining us, and best of luck refining your own SRE practices!

E-Mail your comments to paul@nospam.buetow.org or contact Florian via Cracking AI Engineering :-)

Back to the main site

        </div>

    </content>

</entry>

<entry>

    <title>Posts from October to December 2024</title>

    <link href="gemini://foo.zone/gemfeed/2025-01-01-posts-from-october-to-december-2024.gmi" />

    <id>gemini://foo.zone/gemfeed/2025-01-01-posts-from-october-to-december-2024.gmi</id>

    <updated>2024-12-31T18:09:58+02:00</updated>

    <author>

        <name>Paul Buetow aka snonux</name>

        <email>paul@dev.buetow.org</email>

    </author>

    <summary>Happy new year!</summary>

    <content type="xhtml">

        <div xmlns="http://www.w3.org/1999/xhtml">

            <h1 style='display: inline' id='posts-from-october-to-december-2024'>Posts from October to December 2024</h1><br />

Published at 2024-12-31T18:09:58+02:00

Happy new year!

These are my social media posts from the last three months. I keep them here to reflect on them and also to not lose them. Social media networks come and go and are not under my control, but my domain is here to stay.

These are from Mastodon and LinkedIn. Have a look at my about page for my social media profiles. This list is generated with Gos, my social media platform sharing tool.

My about page

https://codeberg.org/snonux/gos

First on-call experience in a startup. Doesn't sound like a lot of fun! But the lessons were learned! #sre

ntietz.com/bl...irst-on-call/

Reviewing your own PR or MR before asking others to review it makes a lot of sense. I have seen so many silly mistakes that could have been avoided this way, saving time for the real reviewer.

www.jvt.me/po...-code-review/

Fun with defer in #golang. I didn't know that a defer object can be either heap or stack allocated. And there are some rules for inlining, too.

victoriametri.../defer-in-go/

I have been in incidents. Understandably, everyone wants the issue to be resolved as quickly as possible, and others want to know how long the TTR will be. IMHO, providing no estimate at all is no solution either. So maybe give a rough estimate, but clearly communicate that the estimate is rough and that X, Y, and Z can interfere, meaning there is a chance it will take longer to resolve the incident. Just my thought. What's yours?

firehydrant.c...on-estimates/

Little tips for using strings in #golang. I personally think one should look more into the std lib (not just for strings, but also for slices, maps, ...); there are tons of useful helper functions.

www.calhoun.i...trings-in-go/

Reading this post about #rust (especially the first part), I think I made a good choice in deciding to dive into #golang instead. There was a point where I wanted to learn a new programming language, and Rust was on my list of choices. I think the Go project does a much better job of deciding what goes into the language and how. What are your thoughts?

josephg.com/b...writing-rust/

The opposite of #ChaosMonkey: automatically repairing and healing services, helping to reduce manual toil. Runbooks and scripts are only the first step, followed by a full-blown service written in Go. Could be useful, but IMHO, why not rather address the root causes of the manual toil? #sre

blog.cloudfla...t-cloudflare/

I just became a Silver Patreon for OSnews. What is OSnews? It is an independent news site about IT, at times with an alternative angle. I have enjoyed it since my early student days. This one and other projects I financially support are listed here:

foo.zone/gemf...i-support.gmi (Gemini)

foo.zone/gemf...-support.html

Until now, I wasn't aware that Go is under a BSD-style license (3-clause, as it seems). Neat. I don't know why, but I was always under the impression it would be MIT. #bsd #golang

go.dev/LICENSE

These are some book notes from "Staff Engineer" – there is some really good insight into what is expected from a Staff Engineer and beyond in the industry. I wish I had read the book earlier.

foo.zone/gemf...ook-notes.gmi (Gemini)

foo.zone/gemf...ok-notes.html

Looking at #Kubernetes, it's pretty much following the Unix way of doing things. It has many tools, but each tool has its own single purpose: DNS, scheduling, container runtime, various controllers, networking, observability, alerting, and more services in the control plane. Everything is managed by different services or plugins, mostly running in their dedicated pods. They don't communicate through pipes, but network sockets, though. #k8s

There has been an outage at the upstream network provider of OpenBSD.Amsterdam (the hoster I am using). This was the first real-world test for my KISS HA setup, and it worked flawlessly! All my sites and services failed over automatically to my other #OpenBSD VM!

foo.zone/gemf...h-OpenBSD.gmi (Gemini)

foo.zone/gemf...-OpenBSD.html

openbsd.amsterdam/

One of the more confusing parts of Go: nil values vs nil errors. #golang

unexpected-go...l-errors.html

Agreed: writing things down with diagrams helps you think them through more thoroughly, and it keeps others on the same page. Only worth it for projects above a certain size, IMHO.

ntietz.com/bl...-design-docs/

I like the idea of types in Ruby. Raku supports that already, but in Ruby, you must specify the types in a separate .rbs file, which is, in my opinion, cumbersome and a reason not to use it extensively for now. I believe there are efforts to embed the type information in the standard .rb files, and the .rbs format is just an experiment to see how types could work out without introducing changes into the core Ruby language itself right now? #Ruby #RakuLang

github.com/ruby/rbs

So, #Haskell is better suited for general purpose than #Rust? I thought deploying something in Haskell means publishing an academic paper :-) Interesting rant about Rust, though:

chrisdone.com/posts/rust/

At first, functional options add a bit of boilerplate, but they turn out to be quite neat, especially when you have very long parameter lists that need to be made neat and tidy. #golang

www.calhoun.i...aining-in-go/

Revamping my home lab a little bit. #freebsd #bhyve #rocky #linux #vm #k3s #kubernetes #wireguard #zfs #nfs #ha #relayd #k8s #selfhosting #homelab

foo.zone/gemf...sd-part-1.gmi (Gemini)

foo.zone/gemf...d-part-1.html

Wondering which #web #browser I should personally switch to now ...

www.osnews.co...acy-and-more/

eks-node-viewer is a nifty tool showing the compute nodes currently in use in the #EKS cluster. It is especially useful when dynamically allocating nodes with #karpenter or auto scaling groups.

github.com/aw...s-node-viewer

I have put more photos on my static photo sites, generated with a #bash script:

irregular.ninja

In Go, passing pointers is not automatically faster than passing values. Pointers often force the memory to be allocated on the heap, adding GC overhead. With values, Go can decide to put the memory on the stack instead. But with large structs/objects (however you want to call them), or if you want to modify state, pointers are the semantics to use. #golang

blog.boot.dev...-than-values/

Having been part of on-call rotations throughout my whole professional life, I just learned this lesson: "Tell people who are new to on-call: Just have fun" :-) This is a neat blog post to read:

ntietz.com/bl...ew-to-oncall/

Feels good to code in my old love #Perl again after a while. I am implementing a log parser for generating site stats of my personal homepage! :-) @Perl

This is an interactive summary of the Go 1.23 release, with a lot of examples utilising iterators in the slices and maps packages. Love it! #golang

antonz.org/go-1-23/

That's unexpected: you can't remove a NaN key from a map without clearing it! #golang

unexpected-go...aring-it.html

My second blog post about revamping my home lab a little bit just hit the net. #FreeBSD #ZFS #n100 #k8s #k3s #kubernetes

foo.zone/gemf...sd-part-2.gmi (Gemini)

foo.zone/gemf...d-part-2.html

Very insightful article about tech hiring in the age of LLMs. As an interviewer, I have experienced some of the scenarios already first-hand...

newsletter.pr...s-tech-hiring

For #bpf #ebpf performance debugging, have a look at bpftop from Netflix. It is a neat tool showing you the estimated CPU time and other performance statistics for all the BPF programs currently loaded into the #linux kernel. Highly recommended!

github.com/Netflix/bpftop

89 things he/she knows about Git commits is a neat list of #Git wisdom

www.jvt.me/po...know-commits/

I found that working on multiple side projects concurrently is better than concentrating on just one. This seems inefficient at first, but whenever you tend to lose motivation, you can temporarily switch to another one with full élan. However, remember to stop starting and start finishing. This doesn't mean you should be working on 10+ (and a growing list of) side projects concurrently! Select your projects and commit to finishing them before starting the next thing. For example, my current limit of concurrent side projects is around five.

Agreed? Agreed. Besides #Ruby, I would also add #RakuLang and #Perl @Perl to the list of languages that are great for shell scripts - "Making Easy Things Easy and Hard Things Possible"

lucasoshiro.g...-shellscript/

Plan9 assembly format in Go, but wait, it's not the Operating System Plan9! #golang #rabbithole

www.osnews.co...ulations-450/

This is a neat blog post about the Helix text editor, to which I personally switched around a year ago (from NeoVim). I should blog about my experience as well. To summarize: I am using it together with the terminal multiplexer #tmux. It doesn't bother me that Helix is purely terminal-based and therefore everything has to be in the same font. #HelixEditor

jonathan-frer.../posts/helix/

This blog post is basically a rant against DataDog... Personally, I don't have much experience with DataDog (actually, I have never used it), but one way we work with logs at my day job (with over 2,000 physical server machines) while staying cost-effective is by using dtail! #dtail #logs #logmanagement

crys.site/blo...int-the-weel/

dtail.dev

Quick trick to get Helix themes selected randomly #HelixEditor

foo.zone/gemf...ix-themes.gmi (Gemini)

foo.zone/gemf...x-themes.html

Example where complexity attacks you from behind #k8s #kubernetes #OpenAI

surfingcomple...ent-write-up/

LLMs for Ops? Summaries of logs, probabilities about correctness, auto-generating Ansible; some use cases are there. Wouldn't trust it fully, though.

youtu.be/Woda...0egrfl5izCSQI

Excellent article about your dream Product Manager: Why every software team needs a product manager to thrive via @wallabagapp

testdouble.co...ware-delivery

I just finished reading all chapters of CPU land ... not claiming to remember every detail, but it is a great refresher on how CPUs and operating systems actually work under the hood when you execute a program, which we tend to forget in our higher-abstraction world. I liked the "story" and some of the jokes along the way! Size-wise, it is pretty digestible (not talking about books, but only 7 web articles/chapters)! #cpu #linux #unix #kernel #macOS

cpu.land/

Indeed, useful to know this stuff! #sre

biriukov.dev/...applications/

It's the small things that make Unix-like systems, such as GNU/Linux, interesting. I didn't know about this #GNU #Tar behaviour yet:

xeiaso.net/no...pop-quiz-tar/

My New Year's resolution is not to start any new non-fiction books (or only very few) but to re-read and listen to my favorites, to reflect on them and see things from different perspectives. Every time you re-read a book, you gain new insights.

Other related posts:

2025-01-01 Posts from October to December 2024 (You are currently reading this)

E-Mail your comments to paul@nospam.buetow.org :-)

Back to the main site

        </div>

    </content>

</entry>

<entry>

    <title>Random Helix Themes</title>

    <link href="gemini://foo.zone/gemfeed/2024-12-15-random-helix-themes.gmi" />

    <id>gemini://foo.zone/gemfeed/2024-12-15-random-helix-themes.gmi</id>

    <updated>2024-12-15T13:55:05+02:00</updated>

    <author>

        <name>Paul Buetow aka snonux</name>

        <email>paul@dev.buetow.org</email>

    </author>

    <summary>I thought it would be fun to have a random Helix theme every time I open a new shell. Helix is the text editor I use.</summary>

    <content type="xhtml">

        <div xmlns="http://www.w3.org/1999/xhtml">

            <h1 style='display: inline' id='random-helix-themes'>Random Helix Themes</h1><br />

Published at 2024-12-15T13:55:05+02:00; Last updated 2024-12-18

I thought it would be fun to have a random Helix theme every time I open a new shell. Helix is the text editor I use.

https://helix-editor.com/

So I put this into my zsh dotfiles (in some editor.zsh.source in my ~ directory):


export VISUAL=$EDITOR
export GIT_EDITOR=$EDITOR
export HELIX_CONFIG_DIR=$HOME/.config/helix

editor::helix::random_theme () {
    # May add more theme search paths based on OS. This one is
    # for Fedora Linux, but there is also MacOS, etc.
    local -r theme_dir=/usr/share/helix/runtime/themes

    if [ ! -d $theme_dir ]; then
        echo "Helix theme dir $theme_dir doesn't exist"
        return 1
    fi

    local -r config_file=$HELIX_CONFIG_DIR/config.toml
    local -r random_theme="$(basename "$(ls $theme_dir \
        | grep -v random.toml | grep .toml | sort -R \
        | head -n 1)" | cut -d. -f1)"

    sed "/^theme =/ { s/.*/theme = \"$random_theme\"/; }" \
        $config_file > $config_file.tmp &&
        mv $config_file.tmp $config_file
}

if [ -f $HELIX_CONFIG_DIR/config.toml ]; then
    editor::helix::random_theme
fi

So every time I open a new terminal or shell, editor::helix::random_theme gets called, which randomly selects a theme from all installed ones and updates the helix config accordingly.


[paul@earth] ~ % head -n 1 ~/.config/helix/config.toml

theme = "jellybeans"

[paul@earth] ~ % editor::helix::random_theme

[paul@earth] ~ % head -n 1 ~/.config/helix/config.toml

theme = "rose_pine"

[paul@earth] ~ % editor::helix::random_theme

[paul@earth] ~ % head -n 1 ~/.config/helix/config.toml

theme = "noctis"

[paul@earth] ~ %

Update 2024-12-18: This is an improved version, which works cross-platform (e.g., also on MacOS) and supports multiple theme directories:


export VISUAL=$EDITOR
export GIT_EDITOR=$EDITOR
export HELIX_CONFIG_DIR=$HOME/.config/helix

editor::helix::theme::get_random () {
    for dir in $(hx --health \
        | awk '/^Runtime directories/ { print $3 }' | tr ';' ' '); do
        if [ -d $dir/themes ]; then
            ls $dir/themes
        fi
    done | grep -F .toml | sort -R | head -n 1 | cut -d. -f1
}

editor::helix::theme::set () {
    local -r theme="$1"; shift
    local -r config_file=$HELIX_CONFIG_DIR/config.toml

    sed "/^theme =/ { s/.*/theme = \"$theme\"/; }" \
        $config_file > $config_file.tmp &&
        mv $config_file.tmp $config_file
}

if [ -f $HELIX_CONFIG_DIR/config.toml ]; then
    editor::helix::theme::set $(editor::helix::theme::get_random)
fi

I hope you had some fun. E-Mail your comments to paul@nospam.buetow.org :-)

Back to the main site

        </div>

    </content>

</entry>

<entry>

    <title>f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</title>

    <link href="gemini://foo.zone/gemfeed/2024-12-03-f3s-kubernetes-with-freebsd-part-2.gmi" />

    <id>gemini://foo.zone/gemfeed/2024-12-03-f3s-kubernetes-with-freebsd-part-2.gmi</id>

    <updated>2024-12-02T23:48:21+02:00</updated>

    <author>

        <name>Paul Buetow aka snonux</name>

        <email>paul@dev.buetow.org</email>

    </author>

    <summary>This is the second blog post about my f3s series for my self-hosting demands in my home lab. f3s? The 'f' stands for FreeBSD, and the '3s' stands for k3s, the Kubernetes distribution I will use on FreeBSD-based physical machines.</summary>

    <content type="xhtml">

        <div xmlns="http://www.w3.org/1999/xhtml">

            <h1 style='display: inline' id='f3s-kubernetes-with-freebsd---part-2-hardware-and-base-installation'>f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</h1><br />

Published at 2024-12-02T23:48:21+02:00

This is the second blog post about my f3s series for my self-hosting demands in my home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution I will use on FreeBSD-based physical machines.

We set the stage last time; this time, we will set up the hardware for this project.

These are all the posts so far:

2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage

2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation (You are currently reading this)

Logo was generated by ChatGPT.

Let's continue...

Note that the OpenBSD VMs included in the f3s setup (which, as you know from the first part of this series, will be used later for internet ingress) are already in place. These are virtual machines that I rent at OpenBSD Amsterdam and Hetzner.

https://openbsd.amsterdam

https://hetzner.cloud

That leaves the FreeBSD boxes to be covered, which will later run k3s in Linux VMs via the bhyve hypervisor.

I've been considering whether to use Raspberry Pis or look for alternatives. It turns out that complete N100-based mini-computers aren't much more expensive than Raspberry Pi 5s, and they don't require assembly. Furthermore, I like that they are AMD64 and not ARM-based, which increases compatibility with some applications (e.g., I might want to virtualize Windows (via bhyve) on one of those, though that's out of scope for this blog series).

I needed something compact, efficient, and capable enough to handle the demands of a small-scale Kubernetes cluster and preferably something I don't have to assemble a lot. After researching, I decided on the Beelink S12 Pro with Intel N100 CPUs.

Beelink Mini S12 Pro N100 official page

The Intel N100 CPUs are built on the "Alder Lake-N" architecture. These chips are designed to balance performance and energy efficiency well. With four cores, they're more than capable of running multiple containers, even with moderate workloads. Plus, they consume only around 8W of power (ok, that's more than the Pis...), keeping the electricity bill low enough and the setup quiet - perfect for 24/7 operation.

The Beelink comes with the following specs:

I bought three (3) of them for the cluster I intend to build.

Unboxing was uneventful. Every Beelink PC came with:

Overall, I love the small form factor.

I went with the TP-Link 5-port mini switch, as I had a spare one available. That switch will be plugged into my wall Ethernet port, which connects directly to my fiber internet router with 100 Mbit/s download and 50 Mbit/s upload speed.

First, I downloaded the boot-only ISO of the latest FreeBSD release and dumped it on a USB stick via my Fedora laptop:


dd if=FreeBSD-14.1-RELEASE-amd64-bootonly.iso \
    of=/dev/sda conv=sync

Next, I plugged the Beelinks (one after another) into my monitor via HDMI (the resolution of the FreeBSD text console seems strangely stretched, as I am using the LG Dual Up monitor), connected Ethernet, an external USB keyboard, and the FreeBSD USB stick, and booted the devices up. With F7, I entered the boot menu and selected the USB stick for the FreeBSD installation.

The installation was uneventful. I selected:

After doing all that three times (once for each Beelink PC), I had three ready-to-use FreeBSD boxes! Their hostnames are f0, f1 and f2!

After the first boot, I upgraded to the latest FreeBSD patch level as follows:


root@f0:~ # freebsd-update fetch

root@f0:~ # freebsd-update install

root@f0:~ # reboot

I also added the following entries for the three FreeBSD boxes to the /etc/hosts file:


root@f0:~ # cat &lt;&lt;END >> /etc/hosts
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
END

You might wonder: why bother with the hosts file at all? Why not use DNS properly? The reason is simplicity. I don't manage 100 hosts, only a few here and there. With an OpenWRT router in my home, I could also configure everything there, but maybe I'll do that later. For now, keep it simple and straightforward.
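If the entries ever need to be re-applied (say, after reinstalling a box), the append can be made idempotent so repeated runs don't duplicate lines. A sketch against a scratch file standing in for /etc/hosts:

```shell
# Append each host entry only if it isn't already present verbatim.
HOSTS=/tmp/hosts-demo   # stand-in for /etc/hosts
: > "$HOSTS"
for entry in '192.168.1.130 f0 f0.lan f0.lan.buetow.org' \
             '192.168.1.131 f1 f1.lan f1.lan.buetow.org' \
             '192.168.1.132 f2 f2.lan f2.lan.buetow.org'; do
    grep -qxF "$entry" "$HOSTS" || printf '%s\n' "$entry" >> "$HOSTS"
done
grep -c '' "$HOSTS"   # 3 lines, no matter how often this runs
```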

After that, I installed a couple of additional packages, among them: hx (Helix), doas, zfs-periodic, and uptimed.


Helix? It's my favourite text editor. I have nothing against vi but like hx (Helix) more!

https://helix-editor.com/

doas? It's a pretty neat (and KISS) replacement for sudo. It has far fewer features than sudo, which is supposed to make it more secure. Its origin is the OpenBSD project. For doas, I accepted the default configuration (where users in the wheel group are allowed to run commands as root):
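For reference, the wheel rule is a one-liner. This follows the doas.conf(5) examples; the sample file shipped with the FreeBSD package may differ slightly:

```
# /usr/local/etc/doas.conf
permit :wheel
```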


https://man.openbsd.org/doas

zfs-periodic is a nifty tool for automatically creating ZFS snapshots. I decided to go with the following configuration here:


root@f0:~ # cat &lt;&lt;END >> /etc/periodic.conf
daily_zfs_snapshot_enable="YES"
daily_zfs_snapshot_pools="zroot"
daily_zfs_snapshot_keep="7"
weekly_zfs_snapshot_enable="YES"
weekly_zfs_snapshot_pools="zroot"
weekly_zfs_snapshot_keep="5"
monthly_zfs_snapshot_enable="YES"
monthly_zfs_snapshot_pools="zroot"
monthly_zfs_snapshot_keep="6"
END

https://github.com/ross/zfs-periodic

uptimed? I like to track my uptimes. This is how I configured the daemon:


root@f0:~ # hx /usr/local/etc/uptimed.conf

In the Helix editor session, I changed LOG_MAXIMUM_ENTRIES to 0 to keep all uptime entries forever and not cut off at 50 (the default config). After that, I enabled and started uptimed:
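That edit can also be done non-interactively; a sketch against a scratch copy of the file (GNU sed syntax as on my Fedora laptop; FreeBSD's sed wants sed -i ''):

```shell
# Simulate flipping LOG_MAXIMUM_ENTRIES to 0 on a scratch copy.
printf 'LOG_MAXIMUM_ENTRIES=50\n' > /tmp/uptimed-demo.conf
sed -i 's/^LOG_MAXIMUM_ENTRIES=.*/LOG_MAXIMUM_ENTRIES=0/' /tmp/uptimed-demo.conf
cat /tmp/uptimed-demo.conf
```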


root@f0:~ # sysrc uptimed_enable=YES

root@f0:~ # service uptimed start

To check the current uptime stats, I can now run uprecords:


     #               Uptime | System                                     Boot up
----------------------------+---------------------------------------------------
->   1     0 days, 00:07:34 | FreeBSD 14.1-RELEASE      Mon Dec  2 12:21:44 2024
----------------------------+---------------------------------------------------
NewRec     0 days, 00:07:33 | since                     Mon Dec  2 12:21:44 2024
    up     0 days, 00:07:34 | since                     Mon Dec  2 12:21:44 2024
  down     0 days, 00:00:00 | since                     Mon Dec  2 12:21:44 2024
   %up             100.000  | since                     Mon Dec  2 12:21:44 2024

This is how I track the uptimes for all of my hosts:

Unveiling guprecords.raku: Global Uptime Records with Raku

https://github.com/rpodgorny/uptimed

The onboard Ethernet NIC works. Nothing eventful, really. It's a cheap Realtek chip, but it does what it is supposed to do.


re0: flags=1008843&lt;UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST,LOWER_UP&gt; metric 0 mtu 1500
    options=8209b&lt;RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,WOL_MAGIC,LINKSTATE&gt;
    ether e8:ff:1e:d7:1c:ac
    inet 192.168.1.130 netmask 0xffffff00 broadcast 192.168.1.255
    inet6 fe80::eaff:1eff:fed7:1cac%re0 prefixlen 64 scopeid 0x1
    inet6 fd22:c702:acb7:0:eaff:1eff:fed7:1cac prefixlen 64 detached autoconf
    inet6 2a01:5a8:304:1d5c:eaff:1eff:fed7:1cac prefixlen 64 autoconf pltime 10800 vltime 14400
    media: Ethernet autoselect (1000baseT &lt;full-duplex&gt;)
    status: active
    nd6 options=23&lt;PERFORMNUD,ACCEPT_RTADV,AUTO_LINKLOCAL&gt;

The 16 GB of RAM are all there:


hw.physmem: 16902905856
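hw.physmem reports bytes; a quick conversion shows roughly the expected 16 GiB (minus a chunk reserved by firmware):

```shell
# Convert the hw.physmem byte count to GiB.
awk 'BEGIN { printf "%.1f GiB\n", 16902905856 / (1024 * 1024 * 1024) }'
```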

All four CPU cores work:


dev.cpu.3.freq: 705

dev.cpu.2.freq: 705

dev.cpu.1.freq: 604

dev.cpu.0.freq: 604

With powerd running, the CPU frequency is throttled down when the box isn't jam-packed. To stress it a bit, I ran ubench and watched the frequencies get unthrottled again:


paul@f0:~ % rehash # For tcsh to find the newly installed command

paul@f0:~ % ubench &

paul@f0:~ % sysctl dev.cpu | grep freq:

dev.cpu.3.freq: 2922

dev.cpu.2.freq: 2922

dev.cpu.1.freq: 2923

dev.cpu.0.freq: 2922

Idle, all three Beelinks plus the switch consumed 26.2W. But with ubench stressing all the CPUs, it went up to 38.8W.
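Projected over a month, the idle draw is modest; a back-of-the-envelope calculation (the 0.30 EUR/kWh price is purely an assumed example, not my actual tariff):

```shell
# 26.2 W idle draw for the three Beelinks plus switch, over 30 days.
awk 'BEGIN {
    kwh = 26.2 * 24 * 30 / 1000
    printf "%.1f kWh/month, ~%.2f EUR at 0.30 EUR/kWh\n", kwh, kwh * 0.30
}'
```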

The Beelink S12 Pro with Intel N100 CPUs checks all the boxes for a k3s project: compact, efficient, expandable, and affordable. Its compatibility with both Linux and FreeBSD makes it versatile for other use cases, whether as part of your cluster or as a standalone system. If you’re looking for hardware that punches above its weight for Kubernetes, this little device deserves a spot on your shortlist.

To ease cable management, I need to get shorter ethernet cables. I will place the tower on my shelf, where most of the cables will be hidden (together with a UPS, which will also be added to the setup).

What will be covered in the next post of this series? Maybe the bhyve/Rocky Linux and WireGuard setup as described in part 1 of this series...

Other *BSD-related posts:

2016-04-09 Jails and ZFS with Puppet on FreeBSD

2022-07-30 Let's Encrypt with OpenBSD and Rex

2022-10-30 Installing DTail on OpenBSD

2024-01-13 One reason why I love OpenBSD

2024-04-01 KISS high-availability with OpenBSD

2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage

2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation (You are currently reading this)

E-Mail your comments to paul@nospam.buetow.org :-)

Back to the main site

        </div>

    </content>

</entry>

<entry>

    <title>f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</title>

    <link href="gemini://foo.zone/gemfeed/2024-11-17-f3s-kubernetes-with-freebsd-part-1.gmi" />

    <id>gemini://foo.zone/gemfeed/2024-11-17-f3s-kubernetes-with-freebsd-part-1.gmi</id>

    <updated>2024-11-16T23:20:14+02:00</updated>

    <author>

        <name>Paul Buetow aka snonux</name>

        <email>paul@dev.buetow.org</email>

    </author>

    <summary>This is the first blog post about my f3s series for my self-hosting demands in my home lab. f3s? The 'f' stands for FreeBSD, and the '3s' stands for k3s, the Kubernetes distribution I will use on FreeBSD-based physical machines.</summary>

    <content type="xhtml">

        <div xmlns="http://www.w3.org/1999/xhtml">

            <h1 style='display: inline' id='f3s-kubernetes-with-freebsd---part-1-setting-the-stage'>f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</h1><br />

Published at 2024-11-16T23:20:14+02:00

This is the first blog post about my f3s series for my self-hosting demands in my home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution I will use on FreeBSD-based physical machines.

I will post a new entry every month or so (there are too many other side projects for more frequent updates—I bet you can understand).

These are all the posts so far:

2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage (You are currently reading this)

2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation

Logo was generated by ChatGPT.

Let's begin...

My previous setup was great for learning Terraform and AWS, but it is too expensive. Costs are under control there, but only because I shut down all containers after use (so they are offline ninety percent of the time and still cost around $20 monthly). With the new setup, I can run all containers 24/7 at home, which is still cheaper in terms of electricity costs. I have a 50 Mbit/s uplink (I could have more if I wanted, but it is plenty for my use case already).

From babylon5.buetow.org to .cloud

Migrating all my containers off AWS ECS means I need a reliable and scalable environment to host my workloads. I wanted something:

This is still in progress, and I still need to acquire the hardware. But in this first part of the blog series, I will outline what I intend to do.

The setup starts with three physical FreeBSD nodes deployed into my home LAN. On these, I'm going to run Rocky Linux virtual machines with bhyve. Why Linux VMs on FreeBSD and not Linux directly? I want to leverage the great ZFS integration in FreeBSD (among other features), and I have been using FreeBSD for a while in my home lab. And bhyve is a very performant hypervisor, which makes the Linux VMs run at effectively native speed (another use case of mine might be running a Windows bhyve VM on one of the nodes, but that's out of scope for this blog series).

https://www.freebsd.org/

https://wiki.freebsd.org/bhyve

I selected Rocky Linux because it comes with long-term support (I don't want to upgrade the VMs every 6 months). Rocky Linux 9 will reach its end of life in 2032, which is plenty of time! Of course, there will be minor upgrades, but nothing will significantly break my setup.

https://rockylinux.org/

https://wiki.rockylinux.org/rocky/version/

Furthermore, I am already using "RHEL-family" related distros at work and Fedora on my main personal laptop. Rocky Linux belongs to the same type of Linux distribution family, so I already feel at home here. I also used Rocky 9 before I switched to AWS ECS. Now, I am switching back in one sense or another ;-)

These Linux VMs form a three-node k3s Kubernetes cluster, where my containers will reside moving forward. The 3-node k3s cluster will be highly available (in etcd mode), and all apps will probably be deployed with Helm. Prometheus will also be running in k3s, collecting time-series metrics and handling monitoring. Additionally, a private Docker registry will be deployed into the k3s cluster, where I will store some of my self-created Docker images. k3s is the perfect distribution of Kubernetes for homelabbers due to its simplicity and the inclusion of the most useful features out of the box!

https://k3s.io/

Persistent storage for the k3s cluster will be handled by highly available (HA) NFS shares backed by ZFS on the FreeBSD hosts.

On two of the three physical FreeBSD nodes, I will add a second SSD drive to each and dedicate it to a zhast ZFS pool. With HAST (FreeBSD's solution for highly available storage), this pool will be replicated at the byte level to a standby node.
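In hast.conf terms, such a replicated resource might look roughly like this (hostnames and device paths are hypothetical placeholders, not my actual config):

```
# /etc/hast.conf sketch (illustrative only)
resource zhast0 {
        on nodeA {
                local /dev/ada1
                remote nodeB
        }
        on nodeB {
                local /dev/ada1
                remote nodeA
        }
}
```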

A virtual IP (VIP) will point to the master node. When the master node goes down, the VIP will failover to the standby node, where the ZFS pool will be mounted. An NFS server will listen to both nodes. k3s will use the VIP to access the NFS shares.

FreeBSD Wiki: Highly Available Storage

You can think of DRBD as the Linux equivalent of FreeBSD's HAST.

All apps should be reachable through the internet (e.g., from my phone or computer when travelling). For external connectivity and TLS management, I've got two OpenBSD VMs (one hosted by OpenBSD Amsterdam and another hosted by Hetzner) handling public-facing services like DNS, relaying traffic, and automating Let's Encrypt certificates.

All of this (every Linux VM to every OpenBSD box) will be connected via WireGuard tunnels, keeping everything private and secure. There will be 6 WireGuard tunnels (3 k3s nodes times two OpenBSD VMs).
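Each of those six tunnels is an ordinary point-to-point WireGuard interface; a minimal sketch of one side (all keys, names, and addresses are hypothetical placeholders):

```
# wg0.conf sketch on a k3s node (illustrative only)
[Interface]
PrivateKey = (k3s node private key)
Address = 10.100.0.2/32

[Peer]
PublicKey = (OpenBSD VM public key)
Endpoint = vpn.example.org:51820
AllowedIPs = 10.100.0.1/32
PersistentKeepalive = 25
```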

https://en.wikipedia.org/wiki/WireGuard

So, when I want to access a service running in k3s, I will hit an external DNS endpoint (with the authoritative DNS servers being the OpenBSD boxes). The DNS will resolve to the master OpenBSD VM (see my KISS high-availability with OpenBSD blog post), and from there, the relayd process (with a Let's Encrypt certificate; see my Let's Encrypt with OpenBSD and Rex blog post) will accept the TCP connection and forward it through the WireGuard tunnel to a reachable node port of one of the k3s nodes, thus serving the traffic.

KISS high-availability with OpenBSD

Let's Encrypt with OpenBSD and Rex

The OpenBSD setup described here already exists and is ready to use. The only thing that does not yet exist is the configuration of relayd to forward requests to k3s through the WireGuard tunnel(s).

Let's face it, backups are non-negotiable.

On the HAST master node, incremental and encrypted ZFS snapshots are created daily and automatically backed up to AWS S3 Glacier Deep Archive via CRON. I have a bunch of scripts already available, which I currently use for a similar purpose on my FreeBSD Home NAS server (an old ThinkPad T440 with an external USB drive enclosure, which I will eventually retire when the HAST setup is ready). I will copy them and slightly modify them to fit the purpose.

There's also zfstools in the ports, which helps set up an automatic snapshot regime:

https://www.freshports.org/sysutils/zfstools

The backup scripts also perform some zpool scrubbing now and then. A scrub once in a while keeps the trouble away.
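Scheduling a scrub needs nothing more than a cron entry, or FreeBSD's built-in periodic knobs; a sketch of both (the schedule is purely illustrative):

```
# /etc/crontab: scrub zroot at 03:00 on the first of the month
0 3 1 * * root /sbin/zpool scrub zroot

# Or via /etc/periodic.conf:
daily_scrub_zfs_enable="YES"
daily_scrub_zfs_default_threshold="35"   # days between scrubs
```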

Power outages are a regular occurrence in my area, so a UPS will keep the infrastructure running during short outages and protect the hardware. I'm still deciding which model to get, and I do still need one, as my previous NAS is simply an older laptop with a built-in battery to ride out power outages. There are plenty of options to choose from. My main criterion is that the UPS should be silent, as the whole setup will be installed in an upper shelf unit in my daughter's room. ;-)

Robust monitoring is vital to any infrastructure, especially one as distributed as mine. I've thought about a setup that ensures I'll always be aware of what's happening in my environment.

Inside the k3s cluster, Prometheus will be deployed to handle metrics collection. It will be configured to scrape data from my Kubernetes workloads, nodes, and any services I monitor. Prometheus also integrates with Alertmanager to generate alerts based on predefined thresholds or conditions.

https://prometheus.io

For visualization, Grafana will be deployed alongside Prometheus. Grafana lets me build dynamic, customizable dashboards that provide a real-time view of everything from resource utilization to application performance. Whether it's keeping track of CPU load, memory usage, or the health of Kubernetes pods, Grafana has it covered. This will also make troubleshooting easier, as I can quickly pinpoint where issues are arising.

https://grafana.com

Alerts generated by Prometheus are forwarded to Alertmanager, which I will configure to work with Gogios, a lightweight monitoring and alerting system I wrote myself. Gogios runs on one of my OpenBSD VMs. At regular intervals, Gogios scrapes the alerts generated in the k3s cluster and notifies me via Email.

KISS server monitoring with Gogios

Ironically, I implemented Gogios to avoid using more complex alerting systems like Prometheus, but here we are: it integrates well now.

This setup may be just the beginning. Some ideas I'm thinking about for the future:

For now, though, I'm focused on completing the migration from AWS ECS and getting all my Docker containers running smoothly in k3s.

What's your take on self-hosting? Are you planning to move away from managed cloud services? Stay tuned for the second part of this series, where I will likely write about the hardware and the OS setups.

Read the next post of this series:

f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation

Other *BSD-related posts:

2016-04-09 Jails and ZFS with Puppet on FreeBSD

2022-07-30 Let's Encrypt with OpenBSD and Rex

2022-10-30 Installing DTail on OpenBSD

2024-01-13 One reason why I love OpenBSD

2024-04-01 KISS high-availability with OpenBSD

2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage (You are currently reading this)

2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation

E-Mail your comments to paul@nospam.buetow.org :-)

Back to the main site

        </div>

    </content>

</entry>

<entry>

    <title>'Staff Engineer' book notes</title>

    <link href="gemini://foo.zone/gemfeed/2024-10-24-staff-engineer-book-notes.gmi" />

    <id>gemini://foo.zone/gemfeed/2024-10-24-staff-engineer-book-notes.gmi</id>

    <updated>2024-10-24T20:57:44+03:00</updated>

    <author>

        <name>Paul Buetow aka snonux</name>

        <email>paul@dev.buetow.org</email>

    </author>

    <summary>These are my personal takeaways after reading 'Staff Engineer' by Will Larson. Note that the book contains much more knowledge and wisdom and that these notes only contain points I personally found worth writing down. This is mainly for my own use, but you might find it helpful too.</summary>

    <content type="xhtml">

        <div xmlns="http://www.w3.org/1999/xhtml">

            <h1 style='display: inline' id='staff-engineer-book-notes'>"Staff Engineer" book notes</h1><br />

Published at 2024-10-24T20:57:44+03:00

These are my personal takeaways after reading "Staff Engineer" by Will Larson. Note that the book contains much more knowledge and wisdom and that these notes only contain points I personally found worth writing down. This is mainly for my own use, but you might find it helpful too.

     ,..........   ..........,

 ,..,&#39;          &#39;.&#39;          &#39;,..,

,&#39; ,&#39;            :            &#39;, &#39;,

,' ,'            :            ', ',
,' ,'            :            ', ',
,' ,'............., : ,.............', ',
,'  '............  '.'  ............'  ',

'''''''''''''''''';''';''''''''''''''''''

                &#39;&#39;&#39;

Larson breaks down the role of a Staff Engineer into four main archetypes, which can help frame how you approach the role:

As a Staff Engineer, influence is often more important than formal authority. You’ll rarely have direct control over teams or projects but will need to drive outcomes by influencing peers, other teams, and leadership. It’s about understanding how to persuade, align, and mentor others to achieve technical outcomes.

Staff Engineers often need to maintain a breadth of knowledge across various areas while maintaining depth in a few. This can mean keeping a high-level understanding of several domains (e.g., infrastructure, security, product development) but being able to dive deep when needed in certain core areas.

An important part of a Staff Engineer’s role is mentoring others, not just in technical matters but in career development as well. Sponsorship goes a step beyond mentorship, where you actively advocate for others, create opportunities for them, and push them toward growth.

Success as a Staff Engineer often depends on managing up (influencing leadership and setting expectations) and managing across (working effectively with peers and other teams). This is often tied to communication skills, the ability to advocate for technical needs, and fostering alignment across departments or organizations.

While Senior Engineers may focus on execution, Staff Engineers are expected to think strategically, making decisions that will affect the company or product months or years down the line. This means balancing short-term execution needs with long-term architectural decisions, which may require challenging short-term pressures.

The higher you go in engineering roles, the more soft skills, particularly emotional intelligence (EQ), come into play. Building relationships, resolving conflicts, and understanding the broader emotional dynamics of the team and organization become key parts of your role.

Staff Engineers are often placed in situations with high ambiguity—whether in defining the problem space, coming up with a solution, or aligning stakeholders. The ability to operate effectively in these unclear areas is critical to success.

Much of the work done by Staff Engineers is invisible. Solving complex problems, creating alignment, or influencing decisions doesn’t always result in tangible code, but it can have a massive impact. Larson emphasizes that part of the role is being comfortable with this type of invisible contribution.

At the Staff Engineer level, you must scale your impact beyond direct contribution. This can involve improving documentation, developing repeatable processes, mentoring others, or automating parts of the workflow. The idea is to enable teams and individuals to be more effective, even when you’re not directly involved.

Larson touches on how different companies have varying definitions of "Staff Engineer," and titles don’t always correlate directly with responsibility or skill. He emphasizes the importance of focusing more on the work you're doing and the impact you're having, rather than the title itself.

These additional points reflect more of the strategic, interpersonal, and leadership aspects that go beyond the technical expertise expected at this level. The role of a Staff Engineer is often about balancing high-level strategy with technical execution, while influencing teams and projects in a sustainable, long-term way.

It's important to know what work or which role most energizes you. A Staff Engineer is not simply a more senior Senior Engineer; a Staff Engineer also fits one of the archetypes above.

As a staff engineer, you are always expected to go beyond your comfort zone and learn new things.

Your job will sometimes feel like an SEM's and sometimes strangely similar to your previous senior roles.

A Staff engineer is, like a Manager, a leader. However, being a Manager is a specific job. Leaders can apply to any job, especially to Staff engineers.

The more senior you become, the more responsibilities you will have to cope with, in less time. Balance your speed of progress with your personal life; don't work late hours, and don't skip your personal care events.

Do fewer things, but do them better. Everything done well will accelerate the organization; everything else will drag it down. Quality over quantity.

Don't work at ten things and progress slowly; focus on one thing and finish it.

Only spend some of the time firefighting. Have time for deep thinking. Only deep think some of the time. Otherwise, you lose touch with reality.

Sabbatical: Take at least six months. Otherwise, it won't be as restorative.

E-Mail your comments to paul@nospam.buetow.org :-)

Other book notes of mine are:

2023-03-16 "The Pragmatic Programmer" book notes

2023-04-01 "Never split the difference" book notes

2023-05-06 "The Obstacle is the Way" book notes

2023-07-17 "Software Developer's Career Guide and Soft Skills" book notes

2023-11-11 "Mind Management" book notes

2024-05-01 "Slow Productivity" book notes

2024-07-07 "The Stoic Challenge" book notes

2024-10-24 "Staff Engineer" book notes (You are currently reading this)

Back to the main site

        </div>

    </content>

</entry>

<entry>

    <title>Gemtexter 3.0.0 - Let's Gemtext again⁴</title>

    <link href="gemini://foo.zone/gemfeed/2024-10-02-gemtexter-3.0.0-lets-gemtext-again-4.gmi" />

    <id>gemini://foo.zone/gemfeed/2024-10-02-gemtexter-3.0.0-lets-gemtext-again-4.gmi</id>

    <updated>2024-10-01T21:46:26+03:00</updated>

    <author>

        <name>Paul Buetow aka snonux</name>

        <email>paul@dev.buetow.org</email>

    </author>

    <summary>I proudly announce that I've released Gemtexter version `3.0.0`. What is Gemtexter? It's my minimalist static site generator for Gemini Gemtext, HTML and Markdown, written in GNU Bash.</summary>

    <content type="xhtml">

        <div xmlns="http://www.w3.org/1999/xhtml">

            <h1 style='display: inline' id='gemtexter-300---let-s-gemtext-again'>Gemtexter 3.0.0 - Let&#39;s Gemtext again⁴</h1><br />

Published at 2024-10-01T21:46:26+03:00

I proudly announce that I've released Gemtexter version 3.0.0. What is Gemtexter? It's my minimalist static site generator for Gemini Gemtext, HTML and Markdown, written in GNU Bash.

https://codeberg.org/snonux/gemtexter

-=[ typewriters ]=- 1/98

                                  .-------.

   .-------.                     _|~~ ~~  |_

  _|~~ ~~  |_       .-------.  =(_|_______|_)

=(_|_______|_)=    _|~~ ~~  |_   |:::::::::|    .-------.

  |:::::::::|    =(_|_______|_)  |:::::::[]|   _|~~ ~~  |_

  |:::::::[]|      |:::::::::|   |o=======.| =(_|_______|_)

  |o=======.|      |:::::::[]|   `"""""""""`   |:::::::::|

jgs `"""""""""`      |o=======.|                  |:::::::[]|
mod. by Paul Buetow  `"""""""""`                  |o=======.|

                                               `"""""""""`

This project is arguably too complex for a Bash script. Writing it in Bash was an experiment to see how maintainable a "larger" Bash script could be. It's still pretty maintainable and lets me try out new Bash tricks every now and then!

Let's list what's new!

The last version of Gemtexter introduced the HTML exact variant, which wasn't enabled by default. This version of Gemtexter removes the previous (inexact) variant and makes the exact variant the default. This is a breaking change, which is why there is a major version bump of Gemtexter. Here is a reminder of what the exact variant was:

Gemtexter is there to convert your Gemini Capsule into other formats, such as HTML and Markdown. An HTML exact variant can now be enabled in the gemtexter.conf by adding the line declare -rx HTML_VARIANT=exact. The HTML/CSS output changed to reflect a more exact Gemtext appearance and to respect the same spacing as you would see in the Geminispace.

Just add...

<< template::inline::toc

...into a Gemtexter template file and Gemtexter will automatically generate a table of contents for the page based on the headings (see this page's ToC for example). The ToC will also have links to the relevant sections in the HTML and Markdown output. Gemtext does not support inline links, so there the ToC is simply displayed as a bullet list.

It was always possible to customize the style of a Gemtexter's resulting HTML page, but all the config options were scattered across multiple files. Now, the CSS style, web fonts, etc., are all configurable via themes.

Simply point HTML_THEME_DIR in the gemtexter.conf file to the corresponding theme directory.
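For example, a hypothetical gemtexter.conf line following the same declare -rx pattern as the HTML_VARIANT setting mentioned earlier (the actual theme directory name may differ):

```shell
# Illustrative only; point this at whichever theme directory you use.
declare -rx HTML_THEME_DIR=./themes/default
echo "$HTML_THEME_DIR"
```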


To customize the theme or create your own, simply copy the theme directory and modify it as needed. This also makes it much easier to switch between layouts.

The default theme is now "back to the basics" and does not utilize any web fonts. The previous themes are still part of the release and can be easily configured. These are currently the future and business themes. You can check them out from the themes directory.

Additionally, there were a couple of bug fixes, refactorings and overall improvements in the documentation made.

E-Mail your comments to paul@nospam.buetow.org :-)

Other related posts are:

2021-04-24 Welcome to the Geminispace

2021-06-05 Gemtexter - One Bash script to rule it all

2022-08-27 Gemtexter 1.1.0 - Let's Gemtext again

2023-03-25 Gemtexter 2.0.0 - Let's Gemtext again²

2023-07-21 Gemtexter 2.1.0 - Let's Gemtext again³

2024-10-02 Gemtexter 3.0.0 - Let's Gemtext again⁴ (You are currently reading this)

Back to the main site

        </div>

    </content>

</entry>

<entry>

    <title>Site Reliability Engineering - Part 4: Onboarding for On-Call Engineers</title>

    <link href="gemini://foo.zone/gemfeed/2024-09-07-site-reliability-engineering-part-4.gmi" />

    <id>gemini://foo.zone/gemfeed/2024-09-07-site-reliability-engineering-part-4.gmi</id>

    <updated>2024-09-07T16:27:58+03:00</updated>

    <author>

        <name>Paul Buetow aka snonux</name>

        <email>paul@dev.buetow.org</email>

    </author>

    <summary>Welcome to Part 4 of my Site Reliability Engineering (SRE) series. I'm currently working as a Site Reliability Engineer, and I’m here to share what SRE is all about in this blog series.</summary>

    <content type="xhtml">

        <div xmlns="http://www.w3.org/1999/xhtml">

            <h1 style='display: inline' id='site-reliability-engineering---part-4-onboarding-for-on-call-engineers'>Site Reliability Engineering - Part 4: Onboarding for On-Call Engineers</h1><br />

Published at 2024-09-07T16:27:58+03:00

Welcome to Part 4 of my Site Reliability Engineering (SRE) series. I'm currently working as a Site Reliability Engineer, and I’m here to share what SRE is all about in this blog series.

2023-08-18 Site Reliability Engineering - Part 1: SRE and Organizational Culture

2023-11-19 Site Reliability Engineering - Part 2: Operational Balance

2024-01-09 Site Reliability Engineering - Part 3: On-Call Culture

2024-09-07 Site Reliability Engineering - Part 4: Onboarding for On-Call Engineers (You are currently reading this)

   __..._   _...__

..-" Y "-.

\ Once upon | /

\ a time..| //

\\ | ///

\\ ..---.|.---.. ///

jgs \_..---.Y.---.._//

This time, I want to share some tips on how to onboard software engineers, QA engineers, and Site Reliability Engineers (SREs) to the primary on-call rotation. Traditionally, onboarding might take half a year (depending on the complexity of the infrastructure), but with a bit of strategy and structured sessions, we've managed to reduce it to just six weeks per person. Let's dive in!

First things first, let's talk about Tier-1. This is where the magic begins. Tier-1 covers over 80% of the common on-call cases and is the perfect proving ground for new on-call engineers to get their feet wet. It's designed to be a manageable training ground.

So how did we cut down the onboarding time so drastically? Here’s the breakdown of our process:

Knowledge Transfer (KT) Sessions: We kicked things off with more than 10 KT sessions, complete with video recordings. These sessions are comprehensive and cover everything from the basics to some more advanced topics. The recorded sessions mean that new engineers can revisit them anytime they need a refresher.

Shadowing Sessions: Each new engineer undergoes two on-call week shadowing sessions. This hands-on experience is invaluable. They get to see real-time incident handling and resolution, gaining practical knowledge that's hard to get from just reading docs.

Comprehensive Runbooks: We created 64 runbooks (by the time of writing, probably more than 100) that are composable like Lego bricks. Each runbook covers a specific scenario and guides the engineer step-by-step to resolution. Pairing these with monitoring alerts linked directly to Confluence docs, and from there to the respective runbooks, ensures every alert can be navigated with ease (well, there are always exceptions to the rule...).

Self-Sufficiency &amp; Confidence Building: With all these resources at their fingertips, our on-call engineers become self-sufficient for most of the common issues they'll face (new starters can now handle around 80% of the most common issues within six weeks of joining the company). This boosts their confidence and ensures they can handle Tier-1 incidents independently.

Documentation and Feedback Loop: Continuous improvement is key. We regularly update our documentation based on feedback from the engineers. This makes our process even more robust and user-friendly.

Let’s briefly touch on the Tier levels:

From Tier-1, engineers naturally grow into Tier-2 and beyond. The structured training and gradual increase in complexity help ensure a smooth transition as they gain experience and confidence. The key here is that engineers stay curious and engaged during on-call, so that they always keep learning.

It is important that runbooks are not a "project to be finished"; runbooks have to be maintained and updated over time. Sections may change, new runbooks need to be added, and old ones can be deleted. So the acceptance criteria of an on-call shift would not just be reacting to alerts and incidents, but also reviewing and updating the current runbooks.

By structuring the onboarding process with KT sessions, shadowing, comprehensive runbooks, and a feedback loop, we've been able to fast-track the process from six months to just six weeks. This not only prepares our engineers for the on-call rotation more quickly but also ensures they're confident and capable when handling incidents.

If you're looking to optimize your on-call onboarding process, these strategies could be your ticket to a more efficient and effective transition. Happy on-calling!

Back to the main site

        </div>

    </content>

</entry>

<entry>

    <title>Projects I financially support</title>

    <link href="gemini://foo.zone/gemfeed/2024-09-07-projects-i-support.gmi" />

    <id>gemini://foo.zone/gemfeed/2024-09-07-projects-i-support.gmi</id>

    <updated>2024-09-07T16:04:19+03:00</updated>

    <author>

        <name>Paul Buetow aka snonux</name>

        <email>paul@dev.buetow.org</email>

    </author>

    <summary>This is the list of projects and initiatives I support/sponsor. </summary>

    <content type="xhtml">

        <div xmlns="http://www.w3.org/1999/xhtml">

            <h1 style='display: inline' id='projects-i-financially-support'>Projects I financially support</h1><br />

Published at 2024-09-07T16:04:19+03:00

This is the list of projects and initiatives I support/sponsor.

(ASCII art of a hundred-dollar bill labelled "Federal Sponsor Note - Open Source Sponsoring and Funding - Awesome Open Source")

Sponsoring free and open-source projects, even for personal use, is important to ensure the sustainability, security, and continuous improvement of the software. It supports developers who often maintain these projects without compensation, helping them provide updates, new features, and security patches. By contributing, you recognize their efforts, foster a culture of innovation, and benefit from perks like early access or support, all while ensuring the long-term viability of the tools you rely on.

Although I am not putting a lot of money into my sponsoring efforts, it still helps the open-source maintainers: the more small sponsors there are, the higher the total sum.

I am a silver Patreon member of OSnews. I have been following this site since my student years. It's always been a great source of independent and slightly alternative IT news.

https://osnews.com

I am a Patreon of the Cup o' Go Podcast. The podcast helps me stay updated with the Go community for around 15 minutes per week. I am not a full-time software developer, but my long-term ambition is to become better in Go every week by working on personal projects and tools for work.

https://cupogo.dev

Codeberg e.V. is a nonprofit organization that provides online resources for software development and collaboration. I am a user and a supporting member, paying an annual membership of €24. I wouldn't have had to pay that membership fee, as Codeberg offers all the services I use for free.

https://codeberg.org

https://codeberg.org/snonux - My Codeberg page

GrapheneOS is an open-source project that improves Android's privacy and security with sandboxing, exploit mitigations, and a permission model. It does not include Google apps or services but offers a sandboxed Google Play compatibility layer and its own apps and services.

I've made a one-off €100 donation because I really like this project, and I run GrapheneOS on my personal phone as my main daily driver.

https://grapheneos.org/

Why GrapheneOS Rox

AnkiDroid is an app that lets you learn flashcards efficiently with spaced repetition. It is compatible with Anki software and supports various flashcard content, syncing, statistics, and more.

I've been learning vocabulary with this free app, and it is, in my opinion, the best flashcard app I know. I've made a one-off $20 donation to this project.

https://opencollective.com/ankidroid

The OpenBSD project produces a free, multi-platform 4.4BSD-based UNIX-like operating system. Its efforts emphasize portability, standardization, correctness, proactive security, and integrated cryptography. As an example of OpenBSD's impact, the popular OpenSSH software comes from OpenBSD. OpenBSD is freely available from its download sites.

I implicitly support the OpenBSD project through a VM I have rented at OpenBSD Amsterdam. They donate €10 per VM and €15 per VM for every renewal to the OpenBSD Foundation, with dedicated servers running vmm(4)/vmd(8) to host opinionated VMs.

https://www.OpenBSD.org

https://OpenBSD.Amsterdam

I am not directly funding this project, but I am a very happy paying customer, and I am listing it here as an alternative to big tech if you don't want to run your own mail infrastructure. I am listing ProtonMail here as it is a non-profit organization, and I want to emphasize the importance of considering alternatives to big tech.

https://proton.me/

This is the alternative to Audible if you are into audiobooks (like I am). For every book or every month of membership, I am also supporting a local bookstore I selected. Their catalog is not as large as Audible's, but it's still pretty decent.

Libro.fm began as a conversation among friends at Third Place Books, a local bookstore in Seattle, Washington, about the growing popularity of audiobooks and the lack of a way for readers to purchase them from independent bookstores. Flash forward, and Libro.fm was founded in 2014.

https://libro.fm

E-mail your comments to paul@nospam.buetow.org :-)

Back to the main site

        </div>

    </content>

</entry>

<entry>

    <title>Typing `127.1` words per minute (`>100wpm average`)</title>

    <link href="gemini://foo.zone/gemfeed/2024-08-05-typing-127.1-words-per-minute.gmi" />

    <id>gemini://foo.zone/gemfeed/2024-08-05-typing-127.1-words-per-minute.gmi</id>

    <updated>2024-08-05T17:39:30+03:00</updated>

    <author>

        <name>Paul Buetow aka snonux</name>

        <email>paul@dev.buetow.org</email>

    </author>

    <summary>After work one day, I noticed some discomfort in my right wrist. Upon research, it appeared to be a mild case of Repetitive Strain Injury (RSI). Initially, I thought that this would go away after a while, but after a week it became even worse. This led me to consider potential causes such as poor posture or keyboard use habits. As an enthusiast of keyboards, I experimented with ergonomic concave ortholinear split keyboards. Wait, what?...</summary>

    <content type="xhtml">

        <div xmlns="http://www.w3.org/1999/xhtml">

            <h1 style='display: inline' id='typing-1271-words-per-minute-100wpm-average'>Typing <span class='inlinecode'>127.1</span> words per minute (<span class='inlinecode'>&gt;100wpm average</span>)</h1><br />

Published at 2024-08-05T17:39:30+03:00

(ASCII art of a keyboard layout, art by Nieminen Mika)

After work one day, I noticed some discomfort in my right wrist. Upon research, it appeared to be a mild case of Repetitive Strain Injury (RSI). Initially, I thought that this would go away after a while, but after a week it became even worse. This led me to consider potential causes such as poor posture or keyboard use habits. As an enthusiast of keyboards, I experimented with ergonomic concave ortholinear split keyboards. Wait, what?...

After discovering ThePrimeagen on YouTube (I found him long ago, but I never bothered buying the same keyboard he uses) and reading/watching a couple of reviews, I thought that, as a computer professional, the equipment is expensive anyway (laptop, adjustable desk, comfortable chair), so why not invest a bit more into the keyboard? I purchased myself the Kinesis Advantage360 Professional keyboard.

For an in-depth review, have a look at this great article:

Review of the Kinesis Advantage360 Professional keyboard

Overall, the keyboard feels robust and of excellent quality. It has some weight to it; because of that, it is not ideally suited for travel, though. But I have a different keyboard to solve this (see later in this post). I love how it is built and how it feels.

Despite encountering concerns about Bluetooth connectivity issues with the Kinesis keyboard during my research, I purchased one anyway, as I intended to use it only via USB. However, I discovered that later firmware updates had addressed the reported Bluetooth issues, and as a result, I did not experience any difficulties and could also enjoy using the keyboard wirelessly.

Many voices on the internet seem to dislike the Gateron Brown switches, the only official choice for non-clicky tactile switches in the Kinesis, so I was also a bit concerned. I almost went with Cherry MX Browns for my Kinesis (a custom build from a 3rd-party provider partnering with Kinesis). Still, I decided on Gateron Browns to try different switches than the Cherry MX Browns I already have on my ZSA Moonlander keyboard (another ortholinear split keyboard, but without a concave keycap layout).

At first, I was disappointed by the Gaterons, as they initially felt a bit mushy compared to the Cherries. Still, over the weeks I grew to prefer them because of their smoothness. Over time, the tactile bumps also became more noticeable (as my perception of them improved). Because of their less pronounced tactile feedback, the Gaterons are less tiring for long typing sessions and better suited for a relaxed typing experience.

So, the Cherry MX switches feel sharper but are more tiring in the long run, while the Gaterons are easier to type on, with slightly less pronounced tactile feedback.

If you ever purchase a Kinesis keyboard, go with the PBT keycaps. They upgrade the typing experience a lot. The only thing you will lose is that the backlighting won't shine through them. But that is a reasonable tradeoff. When do I need backlighting? I am supposed to look at the screen and not the keyboard while typing.

I went with the blank keycaps, by the way.

There is no official keymap editor. You have to edit a configuration file manually, build the firmware from scratch, and upload the firmware with the new keymap to both keyboard halves. The Professional version of this keyboard, by the way, runs on the ZMK open-source firmware.

Many users find the need for an easy-to-use keymap editor an issue. But this is the Pro model. You can also go with the non-Pro, which runs on non-open-source firmware and has no Bluetooth (it must be operated entirely on USB).

There is a 3rd-party solution that is supposed to make configuring the keymap for the Professional model a breeze, but I have never used it. As a part-time programmer and full-time Site Reliability Engineer, I am okay with configuring the keymap in my text editor and building the firmware in a local Docker container, which is one of the standard ways of doing it. You could also use a GitHub pipeline for the firmware build, but I prefer building it locally on my machine. This all seems natural to me, but it may be an issue for "the average Joe" user.

I didn't measure the usual words per minute (wpm) on my previous keyboard, the ZSA Moonlander, but I guess that it was around 40-50wpm. Once the Kinesis arrived, I started practising. The experience was quite different due to the concave keycaps, so I barely managed 10wpm on the first day.

I quickly noticed that I could not continue using the freestyle 6-finger typing system I was used to on my Moonlander or any previous keyboards I worked with. I learned ten-finger touch typing from scratch to be more efficient with the Kinesis keyboard. The keyboard forces you to embrace touch typing.

Sometimes, there were brain farts, and I couldn't type at all. The trick was not to freak out about it, but to move on. If your average goes down a bit for a day, it doesn't matter; the long-term trend over several days and weeks matters, not the one-off wpm high score.

Although my wrist pain seemed to go away after the first week of using the Kinesis, my fingers became tired adjusting to the new way of typing. My hands were stiff, as if I had been training for the Olympics. Only after three weeks did I start to feel comfortable with it. If it weren't for the comments I read online, I would have sent it back after week 2.

I also had a problem with the right pinky finger, where I could not comfortably reach the p key. This involved moving the whole hand. An easy fix was to swap p with ; on the keyboard layout.

As I was going to learn 10-finger touch typing from scratch, I also played with the thought of switching from the Qwerty to the Dvorak or Colemak keymap, but after reading some comments on the internet, I decided against it:

One of the most influential tools in my touch typing journey has been keybr.com. This site/app helped me learn 10-finger touch typing, and I practice daily for 30 minutes (in the first two weeks, up to an hour every day). The key is persistence and focus on technique rather than speed; the latter naturally improves with regular practice. Precision matters, too, so I always correct my errors using the backspace key.

https://keybr.com

I also used a command-line tool called tt, which is written in Go. It has a feature that I found very helpful: the ability to practice typing by piping custom text into it. Additionally, I appreciated its customization options, such as choosing a colour theme and specifying how statistics are displayed.

https://github.com/lemnos/tt

I wrote myself a small Ruby script that would randomly select a paragraph from one of my eBooks or book notes and pipe it to tt. This helped me remember some of the books I read and also practice touch typing.
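The original script was Ruby, but the idea can be sketched in Go as well (a hypothetical minimal version; the file name book-notes.txt is made up): read a text file, split it into blank-line-separated paragraphs, and print a random one, ready to be piped into tt.

```go
package main

import (
	"fmt"
	"math/rand"
	"os"
	"strings"
)

// randomParagraph returns a random blank-line-separated paragraph from text,
// or "" if the text contains no paragraphs.
func randomParagraph(text string) string {
	var paragraphs []string
	for _, p := range strings.Split(text, "\n\n") {
		if s := strings.TrimSpace(p); s != "" {
			paragraphs = append(paragraphs, s)
		}
	}
	if len(paragraphs) == 0 {
		return ""
	}
	return paragraphs[rand.Intn(len(paragraphs))]
}

func main() {
	data, err := os.ReadFile("book-notes.txt") // assumed input file
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(randomParagraph(string(data)))
}
```

Something like `go run pick.go | tt` would then turn a random book note into a typing exercise.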

Overall, I trained for around 4 months in more than 5,000 sessions. My top speed in a session was 127.1wpm (up from barely 10wpm at the beginning).

My overall average speed over those 5,000 sessions was 80wpm. The average speed over the last week was over 100wpm. The green line represents the wpm average (increasing trend), the purple line represents the number of keys in the practices (not much movement there, as all keys are unlocked), and the red line represents the average typing accuracy.

Around the middle, you see a dip in the wpm average. This was where I swapped the p and ; keys, but after some retraining, I got back to the previous level and beyond.

These are some tips and tricks I learned along the way to improve my typing speed:

It's easy to get cramped when trying to hit a new wpm mark, but this is just holding you back. Relax and type at a natural pace. Now I also understand why my Karate Sensei back in London kept screaming "RELAAAX" at me during practice... It didn't help much back then, though, as it is difficult to relax while someone screams at you!

This goes with the previous point. Instead of trying to speed through sessions as quickly as possible, slow down and try to type the words correctly—so don't rush it. If you aren't fast yet, the reason is that your brain hasn't trained enough. It will come over time, and you will be faster.

A trick to getting faster is to type by word and pause between each word so you learn the words by chords. From 80wpm and beyond, this makes a real difference.

I included 10% punctuation and 20% capital letters in my keybr.com practice sessions to simulate real typing conditions, which improved my overall working efficiency. I guess I would have averaged 120wpm if I hadn't included these options...

Reverse shifting, aka left-right shifting, means pressing the shift key with the hand opposite to the one typing the letter.

This makes using the shift key a breeze.

Listening to music helps me enter a flow state during practice sessions, which makes typing training a bit addictive (which is good, isn't it?).

There's a setting on keybr.com that makes it so that every word is always repeated, having you type every word twice in a row. I liked this feature very much, and I think it also helped to improve my practice.

Apparently, if you want to type fast, avoid using the same finger for two consecutive keystrokes. This means you don't always need to use the same finger for the same keys.

However, there are no hard and fast rules. Thus, everyone develops their own system for typing word combinations. An exception would be typing the very same letter twice in a row (e.g., the double t in "letter")—here, you use the same finger for both ts.

You can't reach your average typing speed first thing in the morning. Warm up before the exercise, or practice later in the day. Also, some days are good and others less so, e.g., after a bad night's sleep. What matters is the mid- and long-term trend, not the daily fluctuations.

As mentioned, the Kinesis is a great keyboard, but it is not meant for travel.

I guess keyboards will always be my expensive hobby, so I also purchased another ergonomic, ortho-linear, concave split keyboard, the Glove80 (with the Red Pro low-profile switches). This keyboard is much lighter and, in my opinion, much better suited for travel than the Kinesis. It also comes with a great travel case.

Here is a photo of me using it with my Surface Go 2 (it runs Linux, by the way) while waiting for the baggage drop at the airport:

For everyday work, I prefer the tactile Browns on the Kinesis over the Red Pro I have on the Glove80 (normal profile vs. low profile). The Kinesis feels much more premium, whereas the Glove80 is much lighter and easier to store away in a rucksack (the official travel case is a bit bulky, so I wrapped it simply in bubble plastic).

The F-key row on the Glove80 is odd. I would have preferred more keys on the sides, like on the Kinesis; I use those for [] {} (), which is pretty handy there. However, I like the thumb cluster of the Glove80 more than the one on the Kinesis.

The good thing is that I can switch between both keyboards instantly without retraining my typing memories. I've configured (as much as possible) the same keymaps on both my Kinesis and Glove80, making it easy to switch between them at any occasion.

Interested in the Glove80? I suggest also reading this review:

Review of the Glove80 keyboard

As I mentioned, keyboards will remain an expensive hobby of mine. I don't regret anything here, though. After all, I use keyboards at my day job. I've ordered a Kinesis custom build with the Gateron Kangaroo switches, and I'm excited to see how that compares to my current setup. I'm still deciding whether to keep my Gateron Brown-equipped Kinesis as a secondary keyboard or possibly leave it at my in-laws for use when visiting or to sell it.

When I traveled with the Glove80 for work to the London office, a colleague stared at my keyboard and made jokes that it might be broken (split into two halves). But other than that...

Ten-finger touch typing has improved my efficiency and has become a rewarding discipline. Whether it's the keyboards I use, the tools I practice with, or the techniques I've adopted, each step has been a learning experience. I hope sharing my journey provides valuable insights and inspiration for anyone looking to improve their touch typing skills.

I also accidentally started using a 10-finger-like system (maybe still 6 fingers, but better than before) on my regular laptop keyboard, and I have become more efficient there, too. The form factor is different (not ortholinear, no concave keycaps, etc.), but my typing has improved there as well, even if only by a little bit.

I don't want to return to a non-concave keyboard as my default. I will still use other keyboards once in a while, but only for short periods or when I have to (e.g., travelling with my laptop when there is no space for an external keyboard).

Learning to touch type has been an eye-opening experience for me, not just for work but also for personal projects. Now, writing documentation is so much fun; who could believe that? Furthermore, working with Slack (communicating with colleagues) is more fun now as well.

E-Mail your comments to paul@nospam.buetow.org :-)

Back to the main site

        </div>

    </content>

</entry>

<entry>

    <title>'The Stoic Challenge' book notes</title>

    <link href="gemini://foo.zone/gemfeed/2024-07-07-the-stoic-challenge-book-notes.gmi" />

    <id>gemini://foo.zone/gemfeed/2024-07-07-the-stoic-challenge-book-notes.gmi</id>

    <updated>2024-07-07T12:46:55+03:00</updated>

    <author>

        <name>Paul Buetow aka snonux</name>

        <email>paul@dev.buetow.org</email>

    </author>

    <summary>These are my personal takeaways after reading 'The Stoic Challenge:  A Philosopher's Guide to Becoming Tougher, Calmer, and More Resilient' by William B. Irvine. </summary>

    <content type="xhtml">

        <div xmlns="http://www.w3.org/1999/xhtml">

            <h1 style='display: inline' id='the-stoic-challenge-book-notes'>"The Stoic Challenge" book notes</h1><br />

Published at 2024-07-07T12:46:55+03:00

These are my personal takeaways after reading "The Stoic Challenge: A Philosopher's Guide to Becoming Tougher, Calmer, and More Resilient" by William B. Irvine.

(ASCII art of an arch)

The Stoic gods set you up with a challenge to see how resilient you are. Is getting angry worth the price? If you stay calm, you can find the optimal workaround for the obstacle. Stay calm even with big setbacks. Practice minimalism of negative emotions.

Put a positive spin on everything. What should you do if someone wrongs you? Don't get angry; there is no point in that, it just makes you suffer. Do the best with what you have now, and keep calm and carry on. A resilient person refuses to play the role of a victim. You can develop setback-response skills. Turn a setback, e.g. a handicap, into a personal triumph.

It is not the things done to you, or that happen to you, that matter, but how you take them and react to them.

Don't row against the other boats, but against your own lazy self. It doesn't matter if you are first or last, as long as you defeat your lazy self.

Stoics are thankful that they are mortal, as it reminds them of how great it is to be alive at all. In dying, we are more alive than we have ever been, as everything we do could be the last time we do it. Rather than fighting your death, you should embrace it if there are no workarounds. Embrace a good death.

It is easy to take what we have for granted.

Take setbacks as a challenge. Also, take them with some humor.

What would the Stoic gods do next? This is just a test strategy of theirs. Don't be frustrated at all; be astonished at what comes next. Thank the Stoic gods for testing you. This is the Stoics' comfort-zone extension, aka toughness training.

E-Mail your comments to paul@nospam.buetow.org :-)

Other book notes of mine are:

2023-03-16 "The Pragmatic Programmer" book notes

2023-04-01 "Never split the difference" book notes

2023-05-06 "The Obstacle is the Way" book notes

2023-07-17 "Software Developer's Career Guide and Soft Skills" book notes

2023-11-11 "Mind Management" book notes

2024-05-01 "Slow Productivity" book notes

2024-07-07 "The Stoic Challenge" book notes (You are currently reading this)

2024-10-24 "Staff Engineer" book notes

Back to the main site

        </div>

    </content>

</entry>

<entry>

    <title>Random Weird Things</title>

    <link href="gemini://foo.zone/gemfeed/2024-07-05-random-weird-things.gmi" />

    <id>gemini://foo.zone/gemfeed/2024-07-05-random-weird-things.gmi</id>

    <updated>2024-07-05T10:59:59+03:00</updated>

    <author>

        <name>Paul Buetow aka snonux</name>

        <email>paul@dev.buetow.org</email>

    </author>

    <summary>Every so often, I come across random, weird, and unexpected things on the internet. I thought it would be neat to share them here from time to time. As a start, here are ten of them.</summary>

    <content type="xhtml">

        <div xmlns="http://www.w3.org/1999/xhtml">

            <h1 style='display: inline' id='random-weird-things'>Random Weird Things</h1><br />

Published at 2024-07-05T10:59:59+03:00

Every so often, I come across random, weird, and unexpected things on the internet. I thought it would be neat to share them here from time to time. As a start, here are ten of them.

(ASCII art of a cat exclaiming "WHOA!!")

Run traceroute to get the poem (or song).


traceroute to bad.horse (162.252.205.157), 30 hops max, 60 byte packets

1 dsldevice.lan (192.168.1.1) 5.712 ms 5.800 ms 6.466 ms

2 87-243-116-2.ip.btc-net.bg (87.243.116.2) 8.017 ms 7.506 ms 8.432 ms

3 * * *

4 * * *

5 xe-1-2-0.mpr1.fra4.de.above.net (80.81.194.26) 39.952 ms 40.155 ms 40.139 ms

6 ae12.cs1.fra6.de.eth.zayo.com (64.125.26.172) 128.014 ms * *

7 * * *

8 * * *

9 ae10.cs1.lhr15.uk.eth.zayo.com (64.125.29.17) 120.625 ms 121.117 ms 121.050 ms

10 * * *

11 * * *

12 * * *

13 ae5.mpr1.tor3.ca.zip.zayo.com (64.125.23.118) 192.605 ms 205.741 ms 203.607 ms

14 64.124.217.237.IDIA-265104-ZYO.zip.zayo.com (64.124.217.237) 204.673 ms 134.674 ms 131.442 ms

15 * * *

16 67.223.96.90 (67.223.96.90) 128.245 ms 127.844 ms 127.843 ms

17 bad.horse (162.252.205.130) 128.194 ms 122.854 ms 121.786 ms

18 bad.horse (162.252.205.131) 128.831 ms 128.341 ms 186.559 ms

19 bad.horse (162.252.205.132) 185.716 ms 180.121 ms 180.042 ms

20 bad.horse (162.252.205.133) 203.170 ms 203.076 ms 203.168 ms

21 he.rides.across.the.nation (162.252.205.134) 203.115 ms 141.830 ms 141.799 ms

22 the.thoroughbred.of.sin (162.252.205.135) 147.965 ms 148.230 ms 170.478 ms

23 he.got.the.application (162.252.205.136) 165.161 ms 164.939 ms 159.085 ms

24 that.you.just.sent.in (162.252.205.137) 162.310 ms 158.569 ms 158.896 ms

25 it.needs.evaluation (162.252.205.138) 162.927 ms 163.046 ms 163.085 ms

26 so.let.the.games.begin (162.252.205.139) 233.363 ms 233.545 ms 233.317 ms

27 a.heinous.crime (162.252.205.140) 237.745 ms 233.614 ms 233.740 ms

28 a.show.of.force (162.252.205.141) 237.974 ms 176.085 ms 175.927 ms

29 a.murder.would.be.nice.of.course (162.252.205.142) 181.838 ms 181.858 ms 182.059 ms

30 bad.horse (162.252.205.143) 187.731 ms 187.416 ms 187.532 ms

Fancy watching Star Wars Episode IV in ASCII? Head to the ASCII cinema:

https://asciinema.org/a/569727

Netflix has got the Hello World application running in production 😱

By the time this is posted, it seems that Netflix has taken it offline... I should have taken a screenshot!

In C, you can index an array like this: array[i] (not surprising). But this works as well and is valid C code: i[array] 🤯 That's because, per the spec, A[B] is equivalent to *(A + B), and ordering doesn't matter for the + operator. All three loops below produce the same output. It would be funny to use i[array] in a merge request on some code base on April Fools' Day!


#include <stdio.h>

int main(void) {
    int array[5] = { 1, 2, 3, 4, 5 };
    for (int i = 0; i < 5; i++)
        printf("%d\n", array[i]);
    for (int i = 0; i < 5; i++)
        printf("%d\n", i[array]);
    for (int i = 0; i < 5; i++)
        printf("%d\n", *(i + array));
}

In C, you can prefix variables with $! E.g., the following is valid C code (strictly speaking, $ in identifiers is a compiler extension, but GCC and Clang accept it) 🫠:


#include <stdio.h>

int main(void) {
    int $array[5] = { 1, 2, 3, 4, 5 };
    for (int $i = 0; $i < 5; $i++)
        printf("%d\n", $array[$i]);
    for (int $i = 0; $i < 5; $i++)
        printf("%d\n", $i[$array]);
    for (int $i = 0; $i < 5; $i++)
        printf("%d\n", *($i + $array));
}

Experienced software developers are aware that scripting languages like Python, Perl, Ruby, and JavaScript support object-oriented programming (OOP) concepts such as classes and inheritance. However, many might be surprised to learn that the latest version of the Korn shell (Version 93t+) also supports OOP. In ksh93, OOP is implemented using user-defined types:


typeset -T Point_t=(
    integer -h 'x coordinate' x=0
    integer -h 'y coordinate' y=0
    typeset -h 'point color' color="red"

    function getcolor {
        print -r ${_.color}
    }
    function setcolor {
        _.color=$1
    }
    setxy() {
        _.x=$1; _.y=$2
    }
    getxy() {
        print -r "(${_.x},${_.y})"
    }
)

Point_t point
echo "Initial coordinates are (${point.x},${point.y}). Color is ${point.color}"
point.setxy 5 6
point.setcolor blue
echo "New coordinates are ${point.getxy}. Color is ${point.getcolor}"

exit 0

Using types to create object oriented Korn shell 93 scripts

There is no pointer arithmetic in Go like in C, but it is still possible to do some brain teasers with pointers 😧:


package main

import "fmt"

func main() {

<b><u><font color="#000000">var</font></u></b> i int

f := <b><u><font color="#000000">func</font></u></b>() *int {

	<b><u><font color="#000000">return</font></u></b> &amp;i

}

*f()++

fmt.Println(i)

}

Go playground

Defined in 1998 as one of the IETF's traditional April Fools' jokes (RFC 2324), the Hyper Text Coffee Pot Control Protocol specifies an HTTP status code that is not intended for actual HTTP server implementation. According to the RFC, this code should be returned by teapots when asked to brew coffee. This status code also serves as an Easter egg on some websites, such as Google.com's "I'm a teapot" feature. Occasionally, it is used to respond to a blocked request, even though the more appropriate response would be the 403 Forbidden status code.

https://en.wikipedia.org/wiki/List_of_HTTP_status_codes#418

Many know of jq, the handy little tool and Swiss Army knife for JSON parsing.

https://github.com/jqlang/jq

What many don't know is that jq is actually a full-blown functional programming language, jqlang. Have a look at the language description:

https://github.com/jqlang/jq/wiki/jq-Language-Description

As a matter of fact, the language is so powerful that there exists an implementation of jq in jq itself:

https://github.com/wader/jqjq
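To get a feel for jq as a programming language before diving into jqjq, here is a tiny sketch (assuming jq is installed; `double` is a made-up function name for this example):

```shell
# Define a jq function and map it over an array.
# -c prints compact (single-line) output.
echo '[1,2,3,4]' | jq -c 'def double: . * 2; map(double)'
# → [2,4,6,8]
```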

Here is a snippet from jqjq, to give you a feel for jqlang:

def _token:

def _re($re; f):

  ( . as {$remain, $string_stack}

  | $remain

  | match($re; "m").string

  | f as $token

  | { result: ($token | del(.string_stack))

    , remain: $remain[length:]

    , string_stack:

        ( if $token.string_stack == null then $string_stack

          else $token.string_stack

          end

        )

    }

  );

if .remain == "" then empty

else

  ( . as {$string_stack}

  | _re("^\\s+"; {whitespace: .})

  // _re("^#[^\n]*"; {comment: .})

  // _re("^\\.[_a-zA-Z][_a-zA-Z0-9]*"; {index: .[1:]})

  // _re("^[_a-zA-Z][_a-zA-Z0-9]*"; {ident: .})

  // _re("^@[_a-zA-Z][_a-zA-Z0-9]*"; {at_ident: .})

  // _re("^\\$[_a-zA-Z][_a-zA-Z0-9]*"; {binding: .})

  # 1.23, .123, 123e2, 1.23e2, 123E2, 1.23e+2, 1.23E-2 or 123

  // _re("^(?:[0-9]*\\.[0-9]+|[0-9]+)(?:[eE][-\\+]?[0-9]+)?"; {number: .})

  // _re("^\"(?:[^\"\\\\]|\\\\.)*?\\\\\\(";

      ( .[1:-2]

      | _unescape

      | {string_start: ., string_stack: ($string_stack+["\\("])}

      )

    )

 .

 .

 .

This is a pretty old meme, but still worth posting here (as some may be unaware). The RFC 822 Perl regex to validate email addresses is 😱:

(?:(?:\r\n)?[ \t])*(?:(?:(?:[^()<>@,;:\".[] \000-\031]+(?:(?:(?:\r\n)?[ \t]

)+|\Z|(?=[["()<>@,;:\".[]]))|"(?:[^"\r\]|\.|(?:(?:\r\n)?[ \t]))*"(?:(?:

\r\n)?[ \t]))(?:.(?:(?:\r\n)?[ \t])(?:[^()<>@,;:\".[] \000-\031]+(?:(?:(

?:\r\n)?[ \t])+|\Z|(?=[["()<>@,;:\".[]]))|"(?:[^"\r\]|\.|(?:(?:\r\n)?[

\t]))"(?:(?:\r\n)?[ \t])))@(?:(?:\r\n)?[ \t])(?:[^()<>@,;:\".[] \000-\0

31]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[["()<>@,;:\".[]]))|[([^[]\r\]|\.)*\

>(?:(?:\r\n)?[ \t]))(?:.(?:(?:\r\n)?[ \t])(?:[^()<>@,;:\".[] \000-\031]+

(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[["()<>@,;:\".[]]))|[([^[]\r\]|\.)*](?:

(?:\r\n)?[ \t])))|(?:[^()<>@,;:\".[] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z

|(?=[["()<>@,;:\".[]]))|"(?:[^"\r\]|\.|(?:(?:\r\n)?[ \t]))*"(?:(?:\r\n)

?[ \t]))&lt;(?:(?:\r\n)?[ \t])*(?:@(?:[^()<>@,;:\".[] \000-\031]+(?:(?:(?:\

r\n)?[ \t])+|\Z|(?=[["()<>@,;:\".[]]))|[([^[]\r\]|\.)*](?:(?:\r\n)?[

\t]))(?:.(?:(?:\r\n)?[ \t])(?:[^()<>@,;:\".[] \000-\031]+(?:(?:(?:\r\n)

?[ \t])+|\Z|(?=[["()<>@,;:\".[]]))|[([^[]\r\]|\.)*](?:(?:\r\n)?[ \t]

)))(?:,@(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\".[] \000-\031]+(?:(?:(?:\r\n)?[

\t])+|\Z|(?=[["()<>@,;:\".[]]))|[([^[]\r\]|\.)](?:(?:\r\n)?[ \t])

)(?:.(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\".[] \000-\031]+(?:(?:(?:\r\n)?[ \t]

)+|\Z|(?=[["()<>@,;:\".[]]))|[([^[]\r\]|\.)](?:(?:\r\n)?[ \t])))*)

|\Z|(?=[["()<>@,;:\".[]]))|"(?:[^"\r\]|\.|(?:(?:\r\n)?[ \t]))*"(?:(?:\r

\n)?[ \t]))(?:.(?:(?:\r\n)?[ \t])(?:[^()<>@,;:\".[] \000-\031]+(?:(?:(?:

\r\n)?[ \t])+|\Z|(?=[["()<>@,;:\".[]]))|"(?:[^"\r\]|\.|(?:(?:\r\n)?[ \t

>))"(?:(?:\r\n)?[ \t])))@(?:(?:\r\n)?[ \t])(?:[^()<>@,;:\".[] \000-\031

>+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[["()<>@,;:\".[]]))|[([^[]\r\]|\.)*](

?:(?:\r\n)?[ \t]))(?:.(?:(?:\r\n)?[ \t])(?:[^()<>@,;:\".[] \000-\031]+(?

:(?:(?:\r\n)?[ \t])+|\Z|(?=[["()<>@,;:\".[]]))|[([^[]\r\]|\.)*](?:(?

:\r\n)?[ \t])))&gt;(?:(?:\r\n)?[ \t])*)|(?:[^()<>@,;:\".[] \000-\031]+(?:(?

:(?:\r\n)?[ \t])+|\Z|(?=[["()<>@,;:\".[]]))|"(?:[^"\r\]|\.|(?:(?:\r\n)?

[ \t]))"(?:(?:\r\n)?[ \t])):(?:(?:\r\n)?[ \t])(?:(?:(?:[^()<>@,;:\".[]

\000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[["()<>@,;:\".[]]))|"(?:[^"\r\]|

\.|(?:(?:\r\n)?[ \t]))"(?:(?:\r\n)?[ \t]))(?:.(?:(?:\r\n)?[ \t])*(?:[^()<>

@,;:\".[] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[["()<>@,;:\".[]]))|"

(?:[^"\r\]|\.|(?:(?:\r\n)?[ \t]))"(?:(?:\r\n)?[ \t])))*@(?:(?:\r\n)?[ \t]

)*(?:[^()<>@,;:\".[] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[["()<>@,;:\

".[]]))|[([^[]\r\]|\.)](?:(?:\r\n)?[ \t]))(?:.(?:(?:\r\n)?[ \t])*(?

:[^()<>@,;:\".[] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[["()<>@,;:\".[

]]))|[([^[]\r\]|\.)](?:(?:\r\n)?[ \t])))*|(?:[^()<>@,;:\".[] \000-

\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[["()<>@,;:\".[]]))|"(?:[^"\r\]|\.|(

?:(?:\r\n)?[ \t]))"(?:(?:\r\n)?[ \t]))&lt;(?:(?:\r\n)?[ \t])(?:@(?:[^()<>@,;

:\".[] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[["()<>@,;:\".[]]))|[([

^[]\r\]|\.)](?:(?:\r\n)?[ \t]))(?:.(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\"

.[] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[["()<>@,;:\".[]]))|[([^[\

>\r\]|\.)](?:(?:\r\n)?[ \t])))(?:,@(?:(?:\r\n)?[ \t])(?:[^()<>@,;:\".\

[] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[["()<>@,;:\".[]]))|[([^[]\

r\]|\.)](?:(?:\r\n)?[ \t]))(?:.(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\".[]

\000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[["()<>@,;:\".[]]))|[([^[]\r\]

|\.)](?:(?:\r\n)?[ \t])))):(?:(?:\r\n)?[ \t])*)?(?:[^()<>@,;:\".[] \0

00-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[["()<>@,;:\".[]]))|"(?:[^"\r\]|\

.|(?:(?:\r\n)?[ \t]))"(?:(?:\r\n)?[ \t]))(?:.(?:(?:\r\n)?[ \t])*(?:[^()<>@,

;:\".[] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[["()<>@,;:\".[]]))|"(?

:[^"\r\]|\.|(?:(?:\r\n)?[ \t]))"(?:(?:\r\n)?[ \t])))@(?:(?:\r\n)?[ \t])

(?:[^()<>@,;:\".[] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[["()<>@,;:\".

[]]))|[([^[]\r\]|\.)](?:(?:\r\n)?[ \t]))(?:.(?:(?:\r\n)?[ \t])*(?:[

^()<>@,;:\".[] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[["()<>@,;:\".[]

>))|[([^[]\r\]|\.)](?:(?:\r\n)?[ \t])))&gt;(?:(?:\r\n)?[ \t]))(?:,\s*(

?:(?:[^()<>@,;:\".[] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[["()<>@,;:\

".[]]))|"(?:[^"\r\]|\.|(?:(?:\r\n)?[ \t]))"(?:(?:\r\n)?[ \t]))(?:.(?:(

?:\r\n)?[ \t])*(?:[^()<>@,;:\".[] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[

["()<>@,;:\".[]]))|"(?:[^"\r\]|\.|(?:(?:\r\n)?[ \t]))*"(?:(?:\r\n)?[ \t

>)))@(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\".[] \000-\031]+(?:(?:(?:\r\n)?[ \t

>)+|\Z|(?=[["()<>@,;:\".[]]))|[([^[]\r\]|\.)](?:(?:\r\n)?[ \t]))(?

:.(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\".[] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|

\Z|(?=[["()<>@,;:\".[]]))|[([^[]\r\]|\.)](?:(?:\r\n)?[ \t])))*|(?:

[^()<>@,;:\".[] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[["()<>@,;:\".[\

>]))|"(?:[^"\r\]|\.|(?:(?:\r\n)?[ \t]))"(?:(?:\r\n)?[ \t]))*&lt;(?:(?:\r\n)

?[ \t])*(?:@(?:[^()<>@,;:\".[] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[["

()<>@,;:\".[]]))|[([^[]\r\]|\.)](?:(?:\r\n)?[ \t]))(?:.(?:(?:\r\n)

?[ \t])*(?:[^()<>@,;:\".[] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[["()<>

@,;:\".[]]))|[([^[]\r\]|\.)](?:(?:\r\n)?[ \t])))*(?:,@(?:(?:\r\n)?[

\t])*(?:[^()<>@,;:\".[] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[["()<>@,

;:\".[]]))|[([^[]\r\]|\.)](?:(?:\r\n)?[ \t]))(?:.(?:(?:\r\n)?[ \t]

)*(?:[^()<>@,;:\".[] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[["()<>@,;:\

".[]]))|[([^[]\r\]|\.)](?:(?:\r\n)?[ \t])))):(?:(?:\r\n)?[ \t])*)?

(?:[^()<>@,;:\".[] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[["()<>@,;:\".

[]]))|"(?:[^"\r\]|\.|(?:(?:\r\n)?[ \t]))"(?:(?:\r\n)?[ \t]))(?:.(?:(?:

\r\n)?[ \t])*(?:[^()<>@,;:\".[] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[[

"()<>@,;:\".[]]))|"(?:[^"\r\]|\.|(?:(?:\r\n)?[ \t]))*"(?:(?:\r\n)?[ \t])

+|\Z|(?=[["()<>@,;:\".[]]))|[([^[]\r\]|\.)](?:(?:\r\n)?[ \t]))(?:\

.(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\".[] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z

|(?=[["()<>@,;:\".[]]))|[([^[]\r\]|\.)](?:(?:\r\n)?[ \t])))*&gt;(?:(

?:\r\n)?[ \t]))))?;\s*)

https://pdw.ex-parrot.com/Mail-RFC822-Address.html

I hope you had some fun. E-Mail your comments to paul@nospam.buetow.org :-)

Other related posts are:

Back to the main site

        </div>

    </content>

</entry>

<entry>

    <title>Terminal multiplexing with `tmux`</title>

    <link href="gemini://foo.zone/gemfeed/2024-06-23-terminal-multiplexing-with-tmux.gmi" />

    <id>gemini://foo.zone/gemfeed/2024-06-23-terminal-multiplexing-with-tmux.gmi</id>

    <updated>2024-06-23T22:41:59+03:00</updated>

    <author>

        <name>Paul Buetow aka snonux</name>

        <email>paul@dev.buetow.org</email>

    </author>

    <summary>Tmux (Terminal Multiplexer) is a powerful, terminal-based tool that manages multiple terminal sessions within a single window. Here are some of its primary features and functionalities:</summary>

    <content type="xhtml">

        <div xmlns="http://www.w3.org/1999/xhtml">

            <h1 style='display: inline' id='terminal-multiplexing-with-tmux'>Terminal multiplexing with <span class='inlinecode'>tmux</span></h1><br />

Published at 2024-06-23T22:41:59+03:00

Tmux (Terminal Multiplexer) is a powerful, terminal-based tool that manages multiple terminal sessions within a single window. Here are some of its primary features and functionalities:

https://github.com/tmux/tmux/wiki

     _______

    |.-----.|

    || Tmux||

    ||_.-._||

    `--)-(--`

   __[=== o]___

  |:::::::::::|\

jgs -=========-()

mod. by Paul B.

Before continuing to read this post, I encourage you to get familiar with Tmux first (unless you already know the basics). You can go through the official getting started guide:

https://github.com/tmux/tmux/wiki/Getting-Started

I can also recommend this book (it's the book I used to get started with Tmux):

https://pragprog.com/titles/bhtmux2/tmux-2/

Over the years, I have built a couple of shell helper functions to optimize my workflows. Tmux is extensively integrated into my daily workflows (personal and work). I had colleagues asking me about my Tmux config and helper scripts for Tmux several times. It would be neat to blog about it so that everyone interested in it can make a copy of my configuration and scripts.

The configuration and scripts in this blog post are only the non-work-specific parts. There are more helper scripts, which I only use for work (and aren't really useful outside of work due to the way servers and clusters are structured there).

Tmux is highly configurable, and I think I am only scratching the surface of what is possible with it. Nevertheless, it may still be useful for you. I also love that Tmux is part of the OpenBSD base system!

I am a user of the Z-Shell (zsh), but I believe all the snippets mentioned in this blog post also work with Bash.

https://www.zsh.org

For the most common Tmux commands I use, I have created the following shell aliases:


alias tm=tmux

alias tl='tmux list-sessions'

alias tn=tmux::new

alias ta=tmux::attach

alias tx=tmux::remote

alias ts=tmux::search

alias tssh=tmux::cluster_ssh

Note the tmux::... names; those are custom shell functions, and they aren't part of the Tmux distribution. Let's run through each alias one by one.

The first two are pretty straightforward. tm is simply a shorthand for tmux, so I have to type less, and tl lists all Tmux sessions that are currently open. No magic here.

The tn alias is referencing this function:


tmux::new () {

<b><u><font color="#000000">readonly</font></u></b> session=$1

<b><u><font color="#000000">local</font></u></b> date=date

<b><u><font color="#000000">if</font></u></b> where gdate &amp;&gt;/dev/null; <b><u><font color="#000000">then</font></u></b>

    date=gdate

<b><u><font color="#000000">fi</font></u></b>

tmux::cleanup_default

<b><u><font color="#000000">if</font></u></b> [ -z <font color="#808080">"$session"</font> ]; <b><u><font color="#000000">then</font></u></b>

    tmux::new T$($date +%s)

<b><u><font color="#000000">else</font></u></b>

    tmux new-session -d -s $session

    tmux -<font color="#000000">2</font> attach-session -t $session || tmux -<font color="#000000">2</font> switch-client -t $session

<b><u><font color="#000000">fi</font></u></b>

}

alias tn=tmux::new

There is a lot going on here. Let's have a detailed look at what it is doing. As a note, the function relies on GNU date: on macOS, it looks for the gdate command and falls back to date otherwise. You need to install GNU date on macOS, as it isn't available there by default. As I use Fedora Linux on my personal laptop and a MacBook for work, I have to make it work on both.

First, a Tmux session name can be passed to the function as the first argument. The session name is optional; without it, the function picks T$($date +%s) as a default, which is T followed by the current UNIX epoch, e.g. T1717133796.
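To make the default naming concrete, here is a small sketch of that expression in isolation (the actual epoch value changes every second):

```shell
# The default session name is "T" followed by the current UNIX epoch,
# mirroring the T$($date +%s) expression in the function above.
session="T$(date +%s)"
echo "$session"   # e.g. T1717133796
```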

Note also the call to tmux::cleanup_default; it would clean up all already opened default sessions if they aren't attached. Those sessions were only temporary, and I had too many flying around after a while. So, I decided to auto-delete the sessions if they weren't attached. If I want to keep sessions around, I will rename them with the Tmux command prefix-key $. This is the cleanup function:


tmux::cleanup_default () {

<b><u><font color="#000000">local</font></u></b> s

tmux list-sessions | grep <font color="#808080">'^T.*: '</font> | grep -F -v attached |

cut -d: -f<font color="#000000">1</font> | <b><u><font color="#000000">while</font></u></b> <b><u><font color="#000000">read</font></u></b> -r s; <b><u><font color="#000000">do</font></u></b>

    echo <font color="#808080">"Killing $s"</font>

    tmux kill-session -t <font color="#808080">"$s"</font>

<b><u><font color="#000000">done</font></u></b>

}

The cleanup function kills all open Tmux sessions that haven't been renamed properly yet—but only if they aren't attached (e.g., don't run in the foreground in any terminal). Cleaning them up automatically keeps my Tmux sessions as neat and tidy as possible.
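As an illustration, here is the cleanup function's filter pipeline run over hypothetical tmux list-sessions output (the session names and dates are made up for this sketch):

```shell
# Hypothetical `tmux list-sessions` output, piped through the same filter
# the cleanup function uses: keep only sessions whose name starts with "T"
# and that are not currently attached, then extract the session name.
printf '%s\n' \
  'T1717133796: 1 windows (created Fri May 31 08:36:36 2024)' \
  'work: 2 windows (created Fri May 31 09:00:00 2024) (attached)' \
  'T1717134000: 1 windows (created Fri May 31 08:40:00 2024) (attached)' |
  grep '^T.*: ' | grep -F -v attached | cut -d: -f1
# → T1717133796
```

Only the unattached default session survives the filter; it is the one the function would kill.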

Whenever I am in a temporary session (named T....), I may decide that I want to keep this session around. I have to rename the session to prevent the cleanup function from doing its thing. That's, as mentioned already, easily accomplished with the standard prefix-key $ Tmux command.

The ta alias refers to the following function, which tries to attach to an already-running Tmux session.


tmux::attach () {

<b><u><font color="#000000">readonly</font></u></b> session=$1

<b><u><font color="#000000">if</font></u></b> [ -z <font color="#808080">"$session"</font> ]; <b><u><font color="#000000">then</font></u></b>

    tmux attach-session || tmux::new

<b><u><font color="#000000">else</font></u></b>

    tmux attach-session -t $session || tmux::new $session

<b><u><font color="#000000">fi</font></u></b>

}

alias ta=tmux::attach

If no session is specified (as the argument of the function), it will try to attach to the first open session. If no Tmux server is running, it will create a new one with tmux::new. Otherwise, with a session name given as the argument, it will attach to it. If unsuccessful (e.g., the session doesn't exist), it will be created and attached to.

The tx alias SSHs into the specified remote server and then, remotely on the server itself, starts a nested Tmux session. So we have one Tmux session on the local computer and, inside of it, an SSH connection to a remote server again running a Tmux session. The benefit is that, in case my network connection breaks down, the next time I connect, I can continue my work on the remote server exactly where I left off. The session name is the name of the server being SSHed into. If such a session already exists, it simply attaches to it.


tmux::remote () {

<b><u><font color="#000000">readonly</font></u></b> server=$1

tmux new -s $server <font color="#808080">"ssh -t $server 'tmux attach-session || tmux'"</font> || \

    tmux attach-session -d -t $server

}

alias tx=tmux::remote

To make nested Tmux sessions work smoothly, one must change the Tmux prefix key locally or remotely. By default, the Tmux prefix key is Ctrl-b, so Ctrl-b $, for example, renames the current session. To change the prefix key from the standard Ctrl-b to, for example, Ctrl-g, you must add this to the tmux.conf:

set-option -g prefix C-g

This way, when I want to rename the remote Tmux session, I have to use Ctrl-g $, and when I want to rename the local Tmux session, I still have to use Ctrl-b $. In my case, I have this deployed to all remote servers through a configuration management system (out of scope for this blog post).

There might also be another way around this (without reconfiguring the prefix key), but that is cumbersome to use, as far as I remember.

Even though tmux::cleanup_default prevents me from leaving a huge mess of trillions of Tmux sessions flying around, at times it can still become challenging to find exactly the session I am currently interested in. After a busy workday, I often end up with around twenty sessions on my laptop. This is where fuzzy searching for session names comes in handy, as I often don't remember the exact session names.


tmux::search () {

<b><u><font color="#000000">local</font></u></b> -r session=$(tmux list-sessions | fzf | cut -d: -f<font color="#000000">1</font>)

<b><u><font color="#000000">if</font></u></b> [ -z <font color="#808080">"$TMUX"</font> ]; <b><u><font color="#000000">then</font></u></b>

    tmux attach-session -t $session

<b><u><font color="#000000">else</font></u></b>

    tmux switch -t $session

<b><u><font color="#000000">fi</font></u></b>

}

alias ts=tmux::search

All it does is list all currently open sessions in fzf, where one of them can be searched and selected through fuzzy find, and then either switch (if already inside a session) to the other session or attach to the other session (if not yet in Tmux).

You must install the fzf command on your computer for this to work. This is what it looks like:

Before I used Tmux, I was a heavy user of ClusterSSH, which allowed me to log in to multiple servers at once in a single terminal window and type and run commands on all of them in parallel.

https://github.com/duncs/clusterssh

However, since I started using Tmux, I have retired ClusterSSH: Tmux only needs to run in the terminal, whereas ClusterSSH spawned separate terminal windows, which aren't easily portable (e.g., from a Linux desktop to macOS). The tmux::cluster_ssh function takes N arguments, where the first argument is either a file containing a list of servers or the name of the session to create, and all remaining arguments are the servers to SSH into.

This is the function definition behind the tssh alias:


tmux::cluster_ssh () {

<b><u><font color="#000000">if</font></u></b> [ -f <font color="#808080">"$1"</font> ]; <b><u><font color="#000000">then</font></u></b>

    tmux::tssh_from_file $1

    <b><u><font color="#000000">return</font></u></b>

<b><u><font color="#000000">fi</font></u></b>

tmux::tssh_from_argument $@

}

alias tssh=tmux::cluster_ssh

This function is just a thin wrapper around the more complex tmux::tssh_from_file and tmux::tssh_from_argument functions. Most of the magic happens there.

This is the most magic helper function we will cover in this post. It looks like this:


tmux::tssh_from_argument () {

<b><u><font color="#000000">local</font></u></b> -r session=$1; <b><u><font color="#000000">shift</font></u></b>

<b><u><font color="#000000">local</font></u></b> first_server=$1; <b><u><font color="#000000">shift</font></u></b>

tmux new-session -d -s $session <font color="#808080">"ssh -t $first_server"</font>

<b><u><font color="#000000">if</font></u></b> ! tmux list-session | grep <font color="#808080">"^$session:"</font>; <b><u><font color="#000000">then</font></u></b>

    echo <font color="#808080">"Could not create session $session"</font>

    <b><u><font color="#000000">return</font></u></b> <font color="#000000">2</font>

<b><u><font color="#000000">fi</font></u></b>

<b><u><font color="#000000">for</font></u></b> server <b><u><font color="#000000">in</font></u></b> <font color="#808080">"${@[@]}"</font>; <b><u><font color="#000000">do</font></u></b>

    tmux split-window -t $session <font color="#808080">"tmux select-layout tiled; ssh -t $server"</font>

<b><u><font color="#000000">done</font></u></b>

tmux setw -t $session synchronize-panes on

tmux -<font color="#000000">2</font> attach-session -t $session || tmux -<font color="#000000">2</font> switch-client -t $session

}

It expects at least two arguments. The first argument is the session name to create for the clustered SSH session. All other arguments are server hostnames or FQDNs to which to connect. The first one is used to make the initial session. All remaining ones are added to that session with tmux split-window -t $session.... At the end, we enable synchronized panes by default, so whenever you type, the commands will be sent to every SSH connection, thus allowing the neat ClusterSSH feature to run commands on multiple servers simultaneously. Once done, we attach (or switch, if already in Tmux) to it.

Sometimes, I don't want the synchronized panes behavior and want to switch it off temporarily. I can do that with prefix-key p and prefix-key P after adding the following to my local tmux.conf:

bind-key p setw synchronize-panes off

bind-key P setw synchronize-panes on

This one sets the session name to the file name and then reads a list of servers from that file, passing the list of servers to tmux::tssh_from_argument as the arguments. So, this is a neat little wrapper that also enables me to open clustered SSH sessions from an input file.


tmux::tssh_from_file () {

<b><u><font color="#000000">local</font></u></b> -r serverlist=$1; <b><u><font color="#000000">shift</font></u></b>

<b><u><font color="#000000">local</font></u></b> -r session=$(basename $serverlist | cut -d. -f<font color="#000000">1</font>)

tmux::tssh_from_argument $session $(awk <font color="#808080">'{ print $1} '</font> $serverlist | sed <font color="#808080">'s/.lan./.lan/g'</font>)

}
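To see how the session name is derived from the server-list file, here is the basename/cut step in isolation (manyservers.txt is just an example file name):

```shell
# The session name is the server-list file name without its extension:
# basename strips the directory, cut keeps everything before the first dot.
serverlist=manyservers.txt
session=$(basename "$serverlist" | cut -d. -f1)
echo "$session"   # → manyservers
```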

To open a new session named fish and log in to 4 remote hosts, run this command (Note that it is also possible to specify the remote user):

$ tssh fish blowfish.buetow.org fishfinger.buetow.org \

fishbone.buetow.org user@octopus.buetow.org

To open a new session named manyservers, put many servers (one FQDN per line) into a file called manyservers.txt and simply run:

$ tssh manyservers.txt

These are default Tmux commands that I make heavy use of in a tssh session.

As you will see later in this blog post, I have configured a history limit of 100,000 lines in Tmux so that I can scroll back quite far. One main workflow of mine is to search for text in the Tmux history, select and copy it, and then switch to another window or session and paste it there (e.g., into my text editor to do something with it).

This works by pressing prefix-key [ to enter Tmux copy mode. From there, I can browse the Tmux history of the current window using either the arrow keys or vi-like navigation (see vi configuration later in this blog post) and the Pg-Dn and Pg-Up keys.

I often search the history backwards with prefix-key [ followed by a ?, which opens the Tmux history search prompt.

Once I have identified the terminal text to be copied, I enter visual select mode with v, highlight all the text to be copied (using arrow keys or Vi motions), and press y to yank it (sorry if this all sounds a bit complicated, but Vim/NeoVim users will know this, as it is pretty much how you do it there as well).

For v and y to work, the following has to be added to the Tmux configuration file:

bind-key -T copy-mode-vi 'v' send -X begin-selection

bind-key -T copy-mode-vi 'y' send -X copy-selection-and-cancel

Once the text is yanked, I switch to another Tmux window or session where, for example, a text editor is running and paste the yanked text from Tmux into the editor with prefix-key ]. Note that when pasting into a modal text editor like Vi or Helix, you would first need to enter insert mode before prefix-key ] would paste anything.

Some features I have configured directly in Tmux don't require an external shell alias to function correctly. Let's walk line by line through my local ~/.config/tmux/tmux.conf:

source ~/.config/tmux/tmux.local.conf

set-option -g allow-rename off

set-option -g history-limit 100000

set-option -g status-bg '#444444'

set-option -g status-fg '#ffa500'

set-option -s escape-time 0

There's not much magic happening here. I source a tmux.local.conf, which I sometimes use to override the default configuration that comes from the configuration management system. It is mostly just an empty file, though, so it doesn't throw any errors on Tmux startup when I don't use it.

I work with many terminal outputs, which I also like to search within Tmux. So, I added a large enough history-limit, enabling me to search backwards in Tmux through up to a hundred thousand lines of output.

Besides changing some colours (personal taste), I also set escape-time to 0, which is just a workaround: otherwise, my Helix text editor's ESC key would take ages to trigger within Tmux. I can't remember the gory details; you can leave it out if everything works fine for you without it.

The next lines in the configuration file are:

set-window-option -g mode-keys vi

bind-key -T copy-mode-vi 'v' send -X begin-selection

bind-key -T copy-mode-vi 'y' send -X copy-selection-and-cancel

I navigate within Tmux using Vi keybindings, so mode-keys is set to vi. I use the Helix modal text editor, which is close enough to Vi bindings for simple navigation to feel "native" to me. (By the way, I have been a long-time Vim and NeoVim user, but I eventually switched to Helix. That's off-topic here, but it may be worth another blog post one day.)

The two bind-key commands make it so that I can use v and y in copy mode, which feels more Vi-like (as already discussed earlier in this post).

The next set of lines in the configuration file are:

bind-key h select-pane -L

bind-key j select-pane -D

bind-key k select-pane -U

bind-key l select-pane -R

bind-key H resize-pane -L 5

bind-key J resize-pane -D 5

bind-key K resize-pane -U 5

bind-key L resize-pane -R 5

These allow me to use prefix-key h, prefix-key j, prefix-key k, and prefix-key l for switching panes and prefix-key H, prefix-key J, prefix-key K, and prefix-key L for resizing the panes. If you don't know Vi/Vim/NeoVim, the letters hjkl are commonly used there for left, down, up, and right, which is also the same for Helix, by the way.

The next set of lines in the configuration file are:

bind-key c new-window -c '#{pane_current_path}'

bind-key F new-window -n "session-switcher" "tmux list-sessions | fzf | cut -d: -f1 | xargs tmux switch-client -t"

bind-key T choose-tree

The first one makes any new window start in the directory of the current pane. The second one is more interesting: it lists all open sessions in the fuzzy finder and switches to the selected one. I rely heavily on this during my daily workflow to switch between various sessions depending on the task, e.g. from a remote cluster SSH session to a local code editor.

The third one, choose-tree, opens a tree view in Tmux listing all sessions and windows. This one is handy to get a better overview of what is currently running in any local Tmux session. It looks like this (it also allows me to press a hotkey to switch to a particular Tmux window):

The last remaining lines in my configuration file are:

bind-key p setw synchronize-panes off

bind-key P setw synchronize-panes on

bind-key r source-file ~/.config/tmux/tmux.conf \; display-message "tmux.conf reloaded"

We discussed synchronized panes earlier. I use it all the time in clustered SSH sessions. When enabled, all panes (remote SSH sessions) receive the same keystrokes. This is very useful when you want to run the same commands on many servers at once, such as navigating to a common directory, restarting a couple of services at once, or running tools like htop to quickly monitor system resources.

The last one reloads my Tmux configuration on the fly.

E-Mail your comments to paul@nospam.buetow.org :-)

Back to the main site

        </div>

    </content>

</entry>

<entry>

    <title>Projects I currently don't have time for</title>

    <link href="gemini://foo.zone/gemfeed/2024-05-03-projects-i-currently-dont-have-time-for.gmi" />

    <id>gemini://foo.zone/gemfeed/2024-05-03-projects-i-currently-dont-have-time-for.gmi</id>

    <updated>2024-05-03T16:23:03+03:00</updated>

    <author>

        <name>Paul Buetow aka snonux</name>

        <email>paul@dev.buetow.org</email>

    </author>

    <summary>Over the years, I have collected many ideas for my personal projects and noted them down. I am currently in the process of cleaning up all my notes and reviewing those ideas. I don’t have time for the ones listed here and won’t have any soon due to other commitments and personal projects. So, in order to 'get rid of them' from my notes folder, I decided to simply put them in this blog post so that those ideas don't get lost. Maybe I will pick up one or another idea someday in the future, but for now, they are all put on ice in favor of other personal projects or family time.</summary>

    <content type="xhtml">

        <div xmlns="http://www.w3.org/1999/xhtml">

            <h1 style='display: inline' id='projects-i-currently-don-t-have-time-for'>Projects I currently don&#39;t have time for</h1><br />

Published at 2024-05-03T16:23:03+03:00

Over the years, I have collected many ideas for my personal projects and noted them down. I am currently in the process of cleaning up all my notes and reviewing those ideas. I don’t have time for the ones listed here and won’t have any soon due to other commitments and personal projects. So, in order to "get rid of them" from my notes folder, I decided to simply put them in this blog post so that those ideas don't get lost. Maybe I will pick up one or another idea someday in the future, but for now, they are all put on ice in favor of other personal projects or family time.

Art by Laura Brown

.'~~~~~~~~~~~'.

( .'11 12 1'. )

| :10 \ 2: |

| :9 @-> 3: |

| :8 4; |

'. '..7 6 5..' .'

~-------------~ ldb

The idea was to build the ultimate Arch Linux setup on an old ThinkPad X200 booting with the open-source LibreBoot firmware, complete with a tiling window manager, dmenu, and all the elite tools. This is mainly for fun, as I am pretty happy (and productive) with my Fedora Linux setup. I ran EndeavourOS (close enough to Arch) on an old ThinkPad for a while, but then I switched back to Fedora because the rolling releases were annoying (there were too many updates).

In my student days, I operated a 486DX PC with OpenBSD as my home DSL internet router. I bought the setup from my brother back then. The router's hostname was fishbone, and it performed very well until it became too slow for larger broadband bandwidth after a few years of use.

I had the idea to revive this concept, implement fishbone2, and place it in front of my proprietary ISP router to add an extra layer of security and control in my home LAN. It would serve as the default gateway for all of my devices, including a Wi-Fi access point, would run a DNS server, Pi-hole proxy, VPN client, and DynDNS client. I would also implement high availability using OpenBSD's CARP protocol.

https://openbsdrouterguide.net

https://pi-hole.net/

https://www.OpenBSD.org

https://www.OpenBSD.org/faq/pf/carp.html
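The CARP part of this plan needs surprisingly little configuration; a minimal sketch of the interface config for a fishbone2 failover pair, where all addresses, interface names, and the password are invented for illustration:

```shell
# /etc/hostname.carp0 on the primary fishbone2 (all values invented):
inet 192.168.1.1 255.255.255.0 192.168.1.255 vhid 1 carpdev em0 pass s3cret

# On the backup router, the same line plus a higher advskew, so it only
# takes over the shared IP when the primary stops advertising:
# inet 192.168.1.1 255.255.255.0 192.168.1.255 vhid 1 carpdev em0 pass s3cret advskew 100
```

Both routers then share 192.168.1.1 as the LAN's default gateway, and clients never notice which box is currently master.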

However, I am putting this on hold as I have opted for an OpenWRT-based solution, which was much quicker to set up and runs well enough.

https://OpenWRT.org/

Install Pi-hole on one of my Pis or run it in a container on Freekat. For now, I am putting this on hold as the primary use for this would be ad-blocking, and I am avoiding surfing ad-heavy sites anyway. So there's no significant use for me personally at the moment.

https://pi-hole.net/

The idea was to implement my smart info screen using purely open-source software. It would display information such as the health status of my personal infrastructure, my current work tracker balance (I track how much I work to prevent overworking), and my sports balance (I track my workouts to stay within my quotas for general health). The information would be displayed on a small screen in my home office, on my Pine watch, or remotely from any terminal window.

I don't have this, and I haven't missed it, so I guess it would have been nice to have but wouldn't have provided any value beyond the "fun of tinkering."

I wanted to create the most comfortable setup possible for reading digital notes, articles, and books. This would include a comfy armchair, a silent barebone PC or Raspberry Pi running either Linux or *BSD, and an e-Ink display mounted on a flexible arm/stand. There would also be a small table for my paper journal for occasional note-taking. There is plenty of open-source software available for PDF and ePub reading. It would have been neat, but I am currently using the most straightforward solution: a Kobo Elipsa 2E, which I can use on my sofa.

I had an idea to build a computer infused with retro elements. It wouldn't use actual retro hardware but would look and feel like a retro machine. I would call this machine HAL or Retron.

I would use an old ThinkPad laptop placed on a horizontal stand, running NetBSD, with a keyboard from ModelFkeyboards attached. I would use WindowMaker as the window manager and run terminal applications through Cool Retro Term. For the monitor, I would use an older (black) EIZO model with large bezels.

https://www.NetBSD.org

https://www.modelfkeyboards.com

https://github.com/Swordfish90/cool-retro-term

The computer would occasionally be used to surf the Gemini space, take notes, blog, or do light coding. However, I have abandoned the project for now because there isn't enough space in my apartment, as my daughter will have a room for herself.

My idea involved using a barebone mini PC running FreeBSD with the Navidrome sound server software. I could remotely connect to it from my phone or workstation/laptop to listen to my music collection. The storage would be based on ZFS with at least two drives for redundancy. The app would run in a Linux Docker container under FreeBSD via Bhyve.

https://github.com/navidrome/navidrome

https://wiki.freebsd.org/bhyve

My idea involved purchasing the Meerkat mini PC from System76 and installing FreeBSD. Like the sound-server idea (see previous idea), it would run Linux Docker through Bhyve. I would self-host a bunch of applications on it:

All of this would be within my LAN, but the services would also be accessible from the internet through either Wireguard or SSH reverse tunnels to one of my OpenBSD VMs, for example:

I am abandoning this project for now, as I am currently hosting my apps on AWS ECS Fargate under *.cool.buetow.org, which is "good enough" for the time being and also offers the benefit of learning to use AWS and Terraform, knowledge that can be applied at work.

My personal AWS setup

This was a pet project idea that my brother and I had. The concept was to collect all shell history of all servers at work in a central place, apply ML/AI, and return suggestions for commands to type or allow a fuzzy search on all the commands in the history. The recommendations for the commands on a server could be context-based (e.g., past occurrences on the same server type).

You could decide whether to share your command history with others so they would receive better suggestions depending on which server they are on, or you could keep all the history private and secure. The plan was to add hooks into zsh and bash shells so that all commands typed would be pushed to the central location for data mining.
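The capture side of those shell hooks can be sketched in a few lines. This is a minimal illustration, not the pet project's actual code: the spool file path, field layout, and function name are all invented, and a background job would be assumed to push the spool file to the central service later.

```shell
# Sketch of the capture hook; the spool file path and field layout
# are invented for illustration.
HISTLOG="${HISTLOG:-$HOME/.cmdlog}"

log_command() {
    # Record timestamp, host, and the command line in the spool file.
    printf '%s\t%s\t%s\n' "$(date +%s)" "$(uname -n)" "$1" >> "$HISTLOG"
}

# Wiring it up (illustrative):
#   trap 'log_command "$BASH_COMMAND"' DEBUG   # bash
#   preexec() { log_command "$1"; }            # zsh's preexec hook
```

On the server side, a daemon could then ingest these spool files and feed them into the ML/AI suggestion engine.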

I don't use third-party cloud providers such as Google Photos to store/archive my photos. Instead, they are all on a ZFS volume on my home NAS, with regular offsite backups taken. Thus, my project would involve implementing the features I miss most or finding a solution simple enough to host on my LAN:

KISS static web photo albums with photoalbum.sh

I aimed to have a simple server to which I could sync notes and other documents, ensuring that the data is fully end-to-end encrypted. This way, only the clients could decrypt the data, while an encrypted copy of all the data would be stored on the server side. There are a few solutions (e.g., NextCloud), but they are bloated or complex to set up.

I currently use Syncthing for encrypted file sync across all my devices; however, the data is not end-to-end encrypted. It's a good-enough setup, though, as my Syncthing server is in my home LAN on an encrypted file system.

https://syncthing.net

I also had the idea of using this as a pet project for work and naming it Cryptolake, utilizing post-quantum-safe encryption algorithms and a distributed data store.

I had an idea to implement a higher-level language with strong typing that could be compiled into native Bash code. This would make all resulting Bash scripts more robust and secure by default. The project would involve developing a parser, lexer, and a Bash code generator. I planned to implement this in Go.

I had previously implemented a tiny scripting language called Fype (For Your Program Execution), which could have served as inspiration.

The Fype Programming Language

This is similar to the previous idea, but the difference is that the language would compile into a sed script. Sed has many features, but the brief syntax makes scripts challenging to read. The higher-level language would mimic sed but in a form that is easier for humans to read.

VS-Sim is an open-source simulator programmed in Java for distributed systems. VS-Sim stands for "Verteilte Systeme Simulator," the German translation for "Distributed Systems Simulator." The VS-Sim project was my diploma thesis at Aachen University of Applied Sciences.

https://codeberg.org/snonux/vs-sim

The ideas I had were:

I have put this project on hold for now, as I want to do more things in Go and fewer in Java in my personal time.

My idea was to program a KISS (Keep It Simple, Stupid) ticketing system for my personal use. However, I am abandoning this project because I now use the excellent Taskwarrior software. You can learn more about it at:

https://taskwarrior.org/

At work, an internal service allocates storage space for our customers on our storage clusters. It automates many tasks, but many tweaks are accessible through APIs. I had the idea to implement a Ruby-based DSL that would make using all those APIs for ad-hoc changes effortless, e.g.:

Customer.C1A1.segments.volumes.each do |volume|
    puts volume.usage_stats
    volume.move_off! if volume.over_subscribed?
end

I am abandoning this project because my workplace has stopped the annual pet project competition, and I have other more important projects to work on at the moment.

Creative universe (Work pet project contests)

I value privacy. It would be great to run my own Matrix server for communication within my family. I have yet to have time to look into this more closely.

https://matrix.org

Ampache is an open-source music streaming server that allows you to host and manage your music collection online, accessible via a web interface. Setting it up involves configuring a web server, installing Ampache, and organising your music files, which can be time-consuming.

Librum is a self-hostable e-book reader that allows users to manage and read their e-book collection from a web interface. Designed to be a self-contained platform where users can upload, organise, and access their e-books, Librum emphasises privacy and control over one's digital library.

https://github.com/Librum-Reader/Librum

I am using my Kobo devices or my laptop to read these kinds of things for now.

Memos is a note-taking service that simplifies and streamlines information capture and organisation. It focuses on providing users with a minimalistic and intuitive interface, aiming to enhance productivity without the clutter commonly associated with more complex note-taking apps.

https://www.usememos.com

I am abandoning this idea for now, as I am currently using plain Markdown files for notes and syncing them with Syncthing across my devices.

Bepasty is like a Pastebin for all kinds of files (text, image, audio, video, documents, binary, etc.). It seems very neat, but I only share a little nowadays. When I do, I upload files via SCP to one of my OpenBSD VMs and serve them via vanilla httpd there, keeping it KISS.

https://github.com/bepasty/bepasty-server

I consider myself an advanced programmer in Ruby, Bash, and Perl. However, Python seems to be ubiquitous nowadays, and most of my colleagues prefer Python over any other language. Thus, it also makes sense for me to learn and use Python. After conducting some research, "Fluent Python" appears to be the best book for this purpose.

I don't have time to read this book at the moment, as I am focusing more on Go (Golang) and I know just enough Python to get by (e.g., for code reviews). Additionally, there are still enough colleagues around who can review my Ruby or Bash code.

I've read a couple of Ruby books already, but "Programming Ruby," which covers up to Ruby 3.2, was just recently released. I would like to read this to deepen my Ruby knowledge further and to revisit some concepts that I may have forgotten.

As stated in this blog post, I am currently more eager to focus on Go, so I've put the Ruby book on hold. Additionally, there wouldn't be enough colleagues who could "understand" my advanced Ruby skills anyway, as most of them are either Java developers or SREs who don't code a lot.

I am a big fan of science fiction, but my reading list is currently too long anyway. So, I've put the Hamilton books on the back burner for now. You can see all the novels I've read here:

https://paul.buetow.org/novels.html

gemini://paul.buetow.org/novels.gmi

The website "Why Raku Rox" would showcase the unique features and benefits of the Raku programming language and highlight why it is an exceptional choice for developers. Raku, originally known as Perl 6, is a dynamic, expressive language designed for flexible and powerful software development.

This would be similar to the "Why OpenBSD rocks" site:

https://why-openbsd.rocks

https://raku.org

I am not working on this for now, as I currently don’t even have time to program in Raku.

For work: Implement a PoC that dumps Java heaps to extract secrets from memory. Based on the findings, write a Java program that encrypts secrets in the kernel using the memfd_secret() syscall to make it even more secure.

https://lwn.net/Articles/865256/

Due to other priorities, I am putting this on hold for now. The software we have built is pretty damn secure already!

This research project, based on Brendan Gregg's blog post, could have a significant impact on my work.

https://brendangregg.com/blog/2017-05-09/cpu-utilization-is-wrong.html

The research project would involve setting up dashboards that display actual CPU usage and the cycles versus waiting time for memory access.

E-Mail your comments to paul@nospam.buetow.org :-)

Related and maybe interesting:

Sweating the small stuff - Tiny projects of mine

Back to the main site

        </div>

    </content>

</entry>

<entry>

    <title>'Slow Productivity' book notes</title>

    <link href="gemini://foo.zone/gemfeed/2024-05-01-slow-productivity-book-notes.gmi" />

    <id>gemini://foo.zone/gemfeed/2024-05-01-slow-productivity-book-notes.gmi</id>

    <updated>2024-04-27T14:18:51+03:00</updated>

    <author>

        <name>Paul Buetow aka snonux</name>

        <email>paul@dev.buetow.org</email>

    </author>

    <summary>These are my personal takeaways after reading 'Slow Productivity - The lost Art of Accomplishment Without Burnout' by Cal Newport.</summary>

    <content type="xhtml">

        <div xmlns="http://www.w3.org/1999/xhtml">

            <h1 style='display: inline' id='slow-productivity-book-notes'>"Slow Productivity" book notes</h1><br />

Published at 2024-04-27T14:18:51+03:00

These are my personal takeaways after reading "Slow Productivity - The lost Art of Accomplishment Without Burnout" by Cal Newport.

The case studies in this book were a bit long, but they appeared to be well-researched. I will only highlight the interesting, actionable items in the book notes.

These notes are mainly for my own use, but you may find them helpful.

[ASCII art illustration]

"Slow productivity" does not mean being less productive. Cal Newport wants to point out that you can be much more productive with "slow productivity" than you would be without it. It is a different way of working than most of us are used to in the modern workplace, which is hyper-connected and always online.

People use visible activity instead of real productivity because it's easier to measure. This is called pseudo-productivity.

Pseudo-productivity is used as a proxy for real productivity. If you don't look busy, you are dismissed as lazy or lacking a work ethic.

There is a tendency to perform shallow work because people will otherwise dismiss you as lazy. A lot of shallow work can cause burnout, as multiple things are often being worked on in parallel. The more you have on your plate, the more stressed you will be.

Shallow work usually doesn't help you to accomplish big things. Always have the big picture in mind. Shallow work can't be entirely eliminated, but it can be managed—for example, plan dedicated time slots for certain types of shallow work.

The overall perception is that if you want to accomplish something, you must put yourself on the verge of burnout. Cal Newport writes about "The Lost Art of Accomplishment Without Burnout", where you can accomplish big things without all the stress usually involved.

There are three principles for the maintenance of a sustainable work life:

There will always be more work. The faster you finish it, the quicker you will have something new on your plate.

Reduce the overhead tax. The overhead tax is all the administrative work to be done. With every additional project, there will also be more administrative stuff to be done on your work plate. So, doing fewer things leads to more and better output and better quality for the projects you are working on.

Limit the things on your plate. Limit your missions (personal goals, professional goals). Reduce your main objectives in life. More than five missions are usually not sustainable very easily, so you have to really prioritise what is important to you and your professional life.

A mission is an overall objective/goal that can have multiple projects. Limit the projects as well. Some projects need clear endings (e.g., work in support of a never-ending flow of incoming requests). In this case, set limits (e.g., time box your support hours). You can also plan "office hours" for collaborative work with colleagues to avoid ad hoc distractions.

The key point is that after making these commitments, you really deliver on them. This builds trust, and people will leave you alone and not ask for progress all the time.

Doing fewer things is essential for modern knowledge workers. Breathing space in your work also makes you more creative and happier overall.

Pushing more work onto workers can make them less productive, so the better approach is the pull model, where workers pull in new work when the previous task is finished.

If you can quantify how busy you are or how many other projects you already work on, then it is easier to say no to new things. For example, show what you are doing, what's in the roadmap, etc. Transparency is the key here.

You can have your own simulated pull system if the company doesn't agree to a global one:

Sometimes, a little friction is all that is needed to combat incoming work, e.g., when your manager starts seeing the reality of your work plate, and you also request additional information for the task. If you already have too much on your plate, then decline the new project or make room for it in your calendar. If you present a large task list, others will struggle to assign more to you.

Limit your daily goals. A good measure is to focus on one goal per day. You can time block time for deep work on your daily goal. During that time, you won't be easily available to others.

The battle against distractions must be fought to be the master of your time. Nobody will fight this war for you. You have to do it for yourself. (Also, have a look at Cal Newport's "time block planning" method).

Put tasks on autopilot (regular recurring tasks).

We suffer from overambitious timelines, task lists, and busyness. Focus on what matters. Don't rush your most important work, and you will achieve better results.

Don't rush. If you rush or are under pressure, you will be less effective and eventually burn out. Our brains work better when we are not rushing. The stress heuristic usually only signals too much work once it is already too late to reduce the workload. That's why we all typically have far too much to do.

Have the courage to take longer to do things that are important. For example, plan on a yearly and larger scale, like 2 to 5 years.

Find a reasonable time estimate for a project and then double it to guard against overconfident optimism. Humans are not great at estimating and gravitate towards best-case estimates. If you have planned more than enough time for your project, you will fall into a natural work pace. Otherwise, you will struggle with rushing and stress.

Some days will still be intense and stressful, but those are exceptional cases. After those exceptions (e.g., finalizing that thing, etc.), calmer periods will follow again.

Pace yourself and settle for modest results over time. Simplify and reduce the daily task lists. Meetings: certain hours are protected for work. For each meeting, add a protected block to your calendar, so that meetings fill at most half of your day.

Schedule slow seasons (e.g., when on vacation). Disconnect in the slow season. Doing nothing will not satisfy your mind, though. You could read a book on your subject matter to counteract that.

Obsess over quality even if you lose short-term opportunities by rejecting other projects. Quality demands that you slow down. The two previous principles (do fewer things and work at a natural pace) are mandatory for this principle to work:

Go pro to save time, and don't squeeze everything out that you can from freemium services. Professional software services eliminate administrative work:

Adjust your workplace to what you want to accomplish. You could have dedicated places in your home for different things, e.g., a place where you read and think (armchair) and a place where you collaborate (your desk or whiteboard). Surround yourself with things that inspire you (e.g., your favourite books on your shelf next to you, etc.).

There is the concept of quiet quitting. It doesn't mean quitting your job, but it means that you don't go beyond and above the expectations people have of you. Quiet quitting became popular with modern work, which is often meaningless and full of shallow tasks. If you obsess over quality, you enjoy your craft and want to go beyond and above.

Implement rituals and routines which shift you towards your goals:

Deciding what not to do is as important as deciding what to do.

It appears to be money thrown out of the window, but get yourself an expensive $50 paper notebook (and also a good pen). Unconsciously, it will make you take notes more seriously. You will think more profoundly about what to put into the notebook and will have thought the ideas through more intensively. If you used very cheap notebooks, you would scribble a lot of rubbish and wouldn't even recognise your own handwriting after a while. So choosing a high-quality notebook will help you take higher-quality notes, too.

Slow productivity is actionable and can be applied immediately.

E-Mail your comments to paul@nospam.buetow.org :-)

Other book notes of mine are:

2023-03-16 "The Pragmatic Programmer" book notes

2023-04-01 "Never split the difference" book notes

2023-05-06 "The Obstacle is the Way" book notes

2023-07-17 "Software Developer's Career Guide and Soft Skills" book notes

2023-11-11 "Mind Management" book notes

2024-05-01 "Slow Productivity" book notes (You are currently reading this)

2024-07-07 "The Stoic Challenge" book notes

2024-10-24 "Staff Engineer" book notes

Back to the main site

        </div>

    </content>

</entry>

<entry>

    <title>KISS high-availability with OpenBSD</title>

    <link href="gemini://foo.zone/gemfeed/2024-04-01-KISS-high-availability-with-OpenBSD.gmi" />

    <id>gemini://foo.zone/gemfeed/2024-04-01-KISS-high-availability-with-OpenBSD.gmi</id>

    <updated>2024-03-30T22:12:56+02:00</updated>

    <author>

        <name>Paul Buetow aka snonux</name>

        <email>paul@dev.buetow.org</email>

    </author>

    <summary>I have always wanted a highly available setup for my personal websites. I could have used off-the-shelf hosting solutions or hosted my sites in an AWS S3 bucket. I have used technologies like (in unsorted and slightly unrelated order) BGP, LVS/IPVS, ldirectord, Pacemaker, STONITH, scripted VIP failover via ARP, heartbeat, heartbeat2, Corosync, keepalived, DRBD, and commercial F5 Load Balancers for high availability at work. </summary>

    <content type="xhtml">

        <div xmlns="http://www.w3.org/1999/xhtml">

            <h1 style='display: inline' id='kiss-high-availability-with-openbsd'>KISS high-availability with OpenBSD</h1><br />

Published at 2024-03-30T22:12:56+02:00

I have always wanted a highly available setup for my personal websites. I could have used off-the-shelf hosting solutions or hosted my sites in an AWS S3 bucket. I have used technologies like (in unsorted and slightly unrelated order) BGP, LVS/IPVS, ldirectord, Pacemaker, STONITH, scripted VIP failover via ARP, heartbeat, heartbeat2, Corosync, keepalived, DRBD, and commercial F5 Load Balancers for high availability at work.

But still, my personal sites were never highly available. All those technologies are great for professional use, but I was looking for something much more straightforward for my personal space - something as KISS (keep it simple and stupid) as possible.

It would be fine if my personal website wasn't highly available, but the geek in me wants it anyway.

PS: ASCII-art below reflects an OpenBSD under-water world with all the tools available in the base system.

Art by Michael J. Penick (mod. by Paul B.)

[ASCII art: an OpenBSD under-water world — an nsd tower and relayd-castle under the ACME-sky, with Puffy, a ksh goat, dig-bubbles, an awk-ward plant, sed-roots, and httpd-soil below]

My HA solution for Web and Gemini is based on DNS (OpenBSD's nsd) and a simple shell script (OpenBSD's ksh and some little sed and awk and grep). All software used here is part of the OpenBSD base system and no external package needs to be installed - OpenBSD is a complete operating system.

https://man.OpenBSD.org/nsd.8

https://man.OpenBSD.org/ksh

https://man.OpenBSD.org/awk

https://man.OpenBSD.org/sed

https://man.OpenBSD.org/dig

https://man.OpenBSD.org/ftp

https://man.OpenBSD.org/cron

I also used the dig (for DNS checks) and ftp (for HTTP/HTTPS checks) programs.

The DNS failover is performed automatically between the two OpenBSD VMs involved (my setup doesn't require any quorum for a failover, so there isn't a need for a 3rd VM). The ksh script, executed once per minute via CRON (on both VMs), performs a health check to determine whether the current master node is available. If the current master isn't available (no HTTP response as expected), a failover is performed to the standby VM:

ZONES_DIR=/var/nsd/zones/master/
DEFAULT_MASTER=fishfinger.buetow.org
DEFAULT_STANDBY=blowfish.buetow.org

determine_master_and_standby () {
    local master=$DEFAULT_MASTER
    local standby=$DEFAULT_STANDBY
    .
    .
    .
    local -i health_ok=1
    if ! ftp -4 -o - https://$master/index.txt | grep -q "Welcome to $master"; then
        echo "https://$master/index.txt IPv4 health check failed"
        health_ok=0
    elif ! ftp -6 -o - https://$master/index.txt | grep -q "Welcome to $master"; then
        echo "https://$master/index.txt IPv6 health check failed"
        health_ok=0
    fi

    if [ $health_ok -eq 0 ]; then
        local tmp=$master
        master=$standby
        standby=$tmp
    fi
    .
    .
    .
}

The failover script looks for the "; Enable failover" string in the DNS zone files and swaps the A and AAAA records of the DNS entries accordingly:

    300 IN A 46.23.94.99 ; Enable failover
    300 IN AAAA 2a03:6000:6f67:624::99 ; Enable failover
www 300 IN A 46.23.94.99 ; Enable failover
www 300 IN AAAA 2a03:6000:6f67:624::99 ; Enable failover
standby 300 IN A 23.88.35.144 ; Enable failover
standby 300 IN AAAA 2a01:4f8:c17:20f1::42 ; Enable failover

sed -E '
    /IN A .*; Enable failover/ {
        /^standby/! {
            s/^(.*) 300 IN A (.*) ; (.*)/\1 300 IN A '$(cat /var/nsd/run/master_a)' ; \3/;
        }
        /^standby/ {
            s/^(.*) 300 IN A (.*) ; (.*)/\1 300 IN A '$(cat /var/nsd/run/standby_a)' ; \3/;
        }
    }
    /IN AAAA .*; Enable failover/ {
        /^standby/! {
            s/^(.*) 300 IN AAAA (.*) ; (.*)/\1 300 IN AAAA '$(cat /var/nsd/run/master_aaaa)' ; \3/;
        }
        /^standby/ {
            s/^(.*) 300 IN AAAA (.*) ; (.*)/\1 300 IN AAAA '$(cat /var/nsd/run/standby_aaaa)' ; \3/;
        }
    }
    / ; serial/ {
        s/^( +) ([0-9]+) .*; (.*)/\1 '$(date +%s)' ; \3/;
    }
'

}

After the failover, the script reloads nsd and performs a sanity check to see if DNS still works. If not, a rollback will be performed:

if [ -f $zone_file.bak ]; then
    mv $zone_file.bak $zone_file
fi

cat $zone_file | transform > $zone_file.new.tmp
grep -v ' ; serial' $zone_file.new.tmp > $zone_file.new.noserial.tmp
grep -v ' ; serial' $zone_file > $zone_file.old.noserial.tmp

echo "Has zone $zone_file changed?"
if diff -u $zone_file.old.noserial.tmp $zone_file.new.noserial.tmp; then
    echo "The zone $zone_file hasn't changed"
    rm $zone_file.*.tmp
    return 0
fi

cp $zone_file $zone_file.bak
mv $zone_file.new.tmp $zone_file
rm $zone_file.*.tmp

echo "Reloading nsd"
nsd-control reload

if ! zone_is_ok $zone; then
    echo "Rolling back $zone_file changes"
    cp $zone_file $zone_file.invalid
    mv $zone_file.bak $zone_file
    echo "Reloading nsd"
    nsd-control reload
    zone_is_ok $zone
    return 3
fi

for cleanup in invalid bak; do
    if [ -f $zone_file.$cleanup ]; then
        rm $zone_file.$cleanup
    fi
done

echo "Failover of zone $zone to $MASTER completed"
return 1
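The zone_is_ok helper referenced above isn't included in this excerpt; see the linked script for the real implementation. A minimal sketch of such a sanity check could query the local nsd with dig (checking only the A record against 127.0.0.1 is an assumption):

```shell
zone_is_ok () {
    # Query the local authoritative nsd for the zone's A record via
    # dig; an empty answer means the reload broke the zone.
    # Checking only the A record against 127.0.0.1 is an assumption.
    local zone=$1
    local answer
    answer=$(dig +short A "$zone" @127.0.0.1)
    if [ -z "$answer" ]; then
        echo "DNS sanity check for $zone failed"
        return 1
    fi
    return 0
}
```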

A non-zero exit code (here, 3 when a rollback was performed and 1 when a DNS failover was performed) will cause CRON to send an e-mail with the whole script output.
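For reference, the corresponding crontab entry could look like this (the install path of the script is an assumption):

```shell
# Run the failover check once per minute on both VMs; cron mails any
# script output to the crontab owner. The script path is invented.
* * * * * /usr/local/bin/dns-failover.ksh
```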

The authoritative nameserver for my domains runs on both VMs, and both are configured as "master" DNS servers, so each has its own individual zone files, which can be changed independently. Otherwise, my setup wouldn't work. The side effect is that in a split-brain scenario (when the two VMs cannot see each other), both would promote themselves to master via their local DNS entries. More about that later, but that's fine for my use case.

Check out the whole script here:

dns-failover.ksh

I am renting two small OpenBSD VMs: One at OpenBSD Amsterdam and the other at Hetzner Cloud. So, both VMs are hosted at another provider, in different IP subnets, and in different countries (the Netherlands and Germany).

https://OpenBSD.Amsterdam

https://www.Hetzner.cloud

I only have a little traffic on my sites. I could always upload the static content to AWS S3 if I suddenly had to. But this will never be required.

A DNS-based failover is cheap, as there isn't any BGP or fancy load balancer to pay for. Small VMs also cost less than millions.

A DNS failover doesn't happen immediately. I've configured a DNS TTL of 300 seconds, and the failover script checks once per minute whether to perform a failover or not. So, in total, a failover can take six minutes (not including other DNS caching servers somewhere in the interweb, but that's fine - eventually, all requests will resolve to the new master after a failover).
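
The worst-case estimate above is simple arithmetic, sketched here in shell:

```shell
# Back-of-the-envelope worst case for a failover, using the numbers
# from this post: DNS TTL plus the cron check interval.
ttl=300            # DNS TTL in seconds
check_interval=60  # the failover script runs once per minute
echo "$(( (ttl + check_interval) / 60 )) minutes"
```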

A split-brain scenario between the old master and the new master might happen. That's OK, as my sites are static, and there's no database to synchronise other than HTML, CSS, and images when the site is updated.

With the DNS failover, the HTTP, HTTPS, and Gemini protocols are all failed over. This works because all domain virtual hosts are configured in both VMs' httpd (OpenBSD's HTTP server) and relayd (also part of OpenBSD; I use it for TLS offloading of the Gemini protocol). So both VMs accept requests for all the hosts; the DNS entries alone determine which VM receives the requests.

https://man.OpenBSD.org/httpd.8

https://man.OpenBSD.org/relayd.8
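
As a rough sketch (paths and exact directives here are illustrative, not my actual configuration), such a vhost pair in httpd.conf could look like this, present identically on both VMs:

```
server "foo.zone" {
    listen on * tls port 443
    root "/htdocs/foo.zone"
}

server "standby.foo.zone" {
    listen on * tls port 443
    root "/htdocs/foo.zone"
}
```

Since both VMs carry both server blocks, only DNS decides which machine answers for which name.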

For example, the master is responsible for the https://www.foo.zone and https://foo.zone hosts, whereas the standby can be reached via https://standby.foo.zone (port 80 for plain HTTP works as well). The same principle is followed with all the other hosts, e.g. irregular.ninja, paul.buetow.org and so on. The same applies to my Gemini capsules for gemini://foo.zone, gemini://standby.foo.zone, gemini://paul.buetow.org and gemini://standby.paul.buetow.org.

On DNS failover, master and standby swap roles without config changes other than the DNS entries. That's KISS (keep it simple and stupid)!

All my hosts use TLS certificates from Let's Encrypt. The ACME automation for requesting and keeping the certificates valid (up to date) requires that the host requesting a certificate from Let's Encrypt is also the host using that certificate.

If the master always served foo.zone and the standby always standby.foo.zone, there would be a problem after a failover: the new master wouldn't have a valid certificate for foo.zone, and the new standby wouldn't have a valid certificate for standby.foo.zone, which would lead to TLS errors on the clients.

As a solution, the CRON job responsible for the DNS failover also checks for the current week number of the year so that:

Which translates to:


local -i -r week_of_the_year=$(date +%U)

if [ $(( week_of_the_year % 2 )) -eq 0 ]; then
    local tmp=$master
    master=$standby
    standby=$tmp
fi

This way, a DNS failover is performed weekly so that the ACME automation can update the Let's Encrypt certificates (for master and standby) before they expire on each VM.

The ACME automation is yet another daily CRON script, /usr/local/bin/acme.sh. It iterates over all of my Let's Encrypt hosts, checks whether they resolve to the same IP address as the current VM, and only then invokes the ACME client to request or renew the TLS certificates. This way, only valid requests are ever made to Let's Encrypt.
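
The core of that check can be sketched as follows (a minimal sketch with assumed helper names; resolve_ip and should_renew are not taken from the actual acme.sh):

```shell
# Renew a certificate only when the host resolves to this VM's IP.
resolve_ip () {
    # The real script would use something like dig(1) or host(1) here.
    dig +short "$1" | tail -n 1
}

should_renew () {
    host=$1
    vm_ip=$2
    [ "$(resolve_ip "$host")" = "$vm_ip" ]
}
```

The daily CRON job would then loop over all hosts and call the ACME client only for those where should_renew succeeds.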

Let's Encrypt certificates usually expire after 3 months, so a weekly failover of my VMs is plenty.

acme.sh.tpl - Rex template for the acme.sh script of mine.

https://man.OpenBSD.org/acme-client.1

Let's Encrypt with OpenBSD and Rex

CRON sends me an E-Mail whenever a failover is performed (or whenever one fails). Furthermore, I monitor my DNS servers and hosts through Gogios, the monitoring system I have developed.

https://codeberg.org/snonux/gogios

KISS server monitoring with Gogios

Gogios, which I developed myself, isn't part of the OpenBSD base system.

I use Rexify, a friendly configuration management system that allows automatic deployment and configuration.

https://www.rexify.org

codeberg.org/snonux/rexfiles/frontends

Rex isn't part of the OpenBSD base system either, but I didn't need to install any external software on OpenBSD for it, as Rex is invoked from my laptop!

Other highly available services running on my OpenBSD VMs are my MTAs for mail forwarding (OpenSMTPD, also part of the OpenBSD base system) and the authoritative DNS servers (nsd) for all my domains. No particular HA setup is required there, though, as the protocols (SMTP and DNS) already take care of failing over to the next available host!

https://www.OpenSMTPD.org/
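
That protocol-level redundancy boils down to plain MX and NS records, roughly like this (an illustrative zone file excerpt, not my actual zone):

```
; Mail: the lower MX preference value is tried first.
foo.zone.    IN MX 10 blowfish.buetow.org.
foo.zone.    IN MX 20 fishfinger.buetow.org.
; DNS: resolvers automatically try the next NS if one is down.
foo.zone.    IN NS    blowfish.buetow.org.
foo.zone.    IN NS    fishfinger.buetow.org.
```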

As a password manager, I use geheim, a command-line tool I wrote in Ruby that keeps encrypted files in a git repository (I even have it installed in Termux on my phone). For HA reasons, I simply updated the client code so that it always synchronises the database with both servers when I run the sync command.

https://codeberg.org/snonux/geheim

E-Mail your comments to paul@nospam.buetow.org :-)

Other *BSD and KISS related posts are:

2016-04-09 Jails and ZFS with Puppet on FreeBSD

2022-07-30 Let's Encrypt with OpenBSD and Rex

2022-10-30 Installing DTail on OpenBSD

2023-06-01 KISS server monitoring with Gogios

2023-10-29 KISS static web photo albums with photoalbum.sh

2024-01-13 One reason why I love OpenBSD

2024-04-01 KISS high-availability with OpenBSD (You are currently reading this)

2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage

2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation

Back to the main site

        </div>

    </content>

</entry>

<entry>

    <title>A fine Fyne Android app for quickly logging ideas programmed in Go</title>

    <link href="gemini://foo.zone/gemfeed/2024-03-03-a-fine-fyne-android-app-for-quickly-logging-ideas-programmed-in-golang.gmi" />

    <id>gemini://foo.zone/gemfeed/2024-03-03-a-fine-fyne-android-app-for-quickly-logging-ideas-programmed-in-golang.gmi</id>

    <updated>2024-03-03T00:07:21+02:00</updated>

    <author>

        <name>Paul Buetow aka snonux</name>

        <email>paul@dev.buetow.org</email>

    </author>

    <summary>I am an ideas person. I frequently find myself somewhere on the streets with an idea in my head but no paper journal to note it down.</summary>

    <content type="xhtml">

        <div xmlns="http://www.w3.org/1999/xhtml">

            <h1 style='display: inline' id='a-fine-fyne-android-app-for-quickly-logging-ideas-programmed-in-go'>A fine Fyne Android app for quickly logging ideas programmed in Go</h1><br />

Published at 2024-03-03T00:07:21+02:00

I am an ideas person. I frequently find myself somewhere on the streets with an idea in my head but no paper journal to note it down.

I have tried many note apps for my Android (I use GrapheneOS) phone. Most of them either don't do what I want, are proprietary software, require Google Play services (I have the main profile on my phone de-googled) or are too bloated. I was never into mobile app development, as I'm not too fond of the complexity of the developer toolchains. I don't want to use Android Studio (as a NeoVim user), and I don't want to use Java or Kotlin. I want to use a language I know (and like) for mobile app development. Go would be one of those languages.

Enter Quick logger – a compact GUI Android app (well, cross-platform, thanks to Fyne) I've crafted using Go and the nifty Fyne framework. With Fyne, the app can easily be compiled into an Android APK. As of this writing, the app's whole Go source code is only 75 lines long! This little tool is designed for spontaneous moments, allowing me to quickly log my thoughts as plain text files on my Android phone. There are no fancy file formats. Just plain text!

https://codeberg.org/snonux/quicklogger

https://fyne.io

https://go.dev

There's no need to navigate complex menus or deal with sync issues. I jot down my idea, and Quick logger saves it to a plain text file in a designated local folder on my phone. There is one text file per note (with a timestamp in the file name). Once logged, a file can't be edited any more (that keeps it simple). If I want to correct or change a note, I simply write a new one. My notes are always small (usually one short sentence each), so there isn't a need for edit functionality. I can edit them later on my actual computer if I want to.

With Syncthing, the note files are then synchronised to the ~/Notes directory on my home computer. From there, a small Raku glue script adds them to my Taskwarrior DB so that I can process them later (e.g. take action on that one idea I had). The script then deletes the original note files from my computer and also (through Syncthing) from my phone.

https://syncthing.net

https://raku.org

https://taskwarrior.org
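
That glue step could be sketched in shell like this (the real script is written in Raku; import_notes and the exact Taskwarrior invocation are assumptions):

```shell
# Import each synced note into Taskwarrior, then delete it so that
# Syncthing also removes it from the phone.
import_notes () {
    dir=${1:-"$HOME/Notes"}
    for note in "$dir"/*.txt; do
        [ -f "$note" ] || continue   # skip when no notes exist
        task add "$(cat "$note")" && rm "$note"
    done
}
```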

Quick logger's user interface is as minimal as it gets. When I launch Quick logger, I'm greeted with a simple window where I can type plain text. Hit the "Log text" button, and voilà – the input is timestamped and saved as a file in my chosen directory. If I need to change the directory, the "Preferences" button brings up a window where I can set the notes folder and get back to logging.

For the code-savvy folks out there, Quick logger is a neat example of what you can achieve with Go and Fyne. It's a testament to building functional, cross-platform apps without getting bogged down in the nitty-gritty of platform-specific details. Thanks to Fyne, I am pleased with how easy it is to make mobile Android apps in Go.

My Android apps will never be polished, but they get the job done, and this is precisely how I want them to be: minimalistic but functional. I could spend more time polishing Quick logger, but then it might end up like any other notes app out there (complicated or bloated).

I did have some issues with the app logo on Android, though. Android always showed the default app icon instead of my custom icon whenever I used a custom AndroidManifest.xml for custom app storage permissions. Without a custom AndroidManifest.xml, the app icon would be displayed under Android, but then the app would not have the MANAGE_EXTERNAL_STORAGE permission, which Quick logger requires to write to a custom directory. I found a workaround, which I commented on here on GitHub:

https://github.com/fyne-io/fyne/issues/3077#issuecomment-1912697360

What worked, however (the app icon showing up), was to clone the Fyne project, change all occurrences of android.permission.INTERNET to android.permission.MANAGE_EXTERNAL_STORAGE in the source tree (as these are all the changes I want in my custom Android manifest), and re-compile Fyne. Now everything works. I know, this is more of a hammer approach!

Hopefully, I won't need to use this workaround anymore. But for now, it is a fair tradeoff for what I am getting.

I hope this will inspire you to write your own small mobile apps in Go using the awesome Fyne framework! PS: The Quick logger logo was generated by ChatGPT.

E-Mail your comments to paul@nospam.buetow.org :-)

Other Go related posts are:

2024-03-03 A fine Fyne Android app for quickly logging ideas programmed in Go (You are currently reading this)

Back to the main site

        </div>

    </content>

</entry>

<entry>

    <title>From `babylon5.buetow.org` to `*.buetow.cloud`</title>

    <link href="gemini://foo.zone/gemfeed/2024-02-04-from-babylon5.buetow.org-to-.cloud.gmi" />

    <id>gemini://foo.zone/gemfeed/2024-02-04-from-babylon5.buetow.org-to-.cloud.gmi</id>

    <updated>2024-02-04T00:50:50+02:00</updated>

    <author>

        <name>Paul Buetow aka snonux</name>

        <email>paul@dev.buetow.org</email>

    </author>

    <summary>Recently, my employer sent me to a week-long AWS course. After the course, there wasn't any hands-on project I could dive into immediately, so I moved parts of my personal infrastructure to AWS to level up a bit through practical hands-on experience.</summary>

    <content type="xhtml">

        <div xmlns="http://www.w3.org/1999/xhtml">

            <h1 style='display: inline' id='from-babylon5buetoworg-to-buetowcloud'>From <span class='inlinecode'>babylon5.buetow.org</span> to <span class='inlinecode'>*.buetow.cloud</span></h1><br />

Published at 2024-02-04T00:50:50+02:00

Recently, my employer sent me to a week-long AWS course. After the course, there wasn't any hands-on project I could dive into immediately, so I moved parts of my personal infrastructure to AWS to level up a bit through practical hands-on experience.

So, I migrated all of my Docker-based self-hosted services to AWS. Usually, I am not a big fan of big cloud providers and instead use smaller hosters or indie providers and self-made solutions. However, I also must go with the times and try out technologies currently hot on the job market. I don't want to become the old man who yells at cloud :D

Before the migration, all those services were reachable through buetow.org subdomains (Buetow is my last name) and ran in Docker containers on a single Rocky Linux 9 VM at Hetzner. An Nginx reverse proxy did the TLS offloading (with Let's Encrypt certificates). The Rocky Linux 9 VM's hostname was babylon5.buetow.org (named after the science fiction series).

https://en.wikipedia.org/wiki/Babylon_5

The downsides of this setup were:

About the manual installation part: I could have used a configuration management system like Rexify, Puppet, etc. But I decided against it back then, as setting up Docker containers isn't very complicated with simple start scripts. And it's only a single Linux box, where a manual installation is less painful. However, regular backups (which Hetzner can do automatically for you) were a must.

The benefits of this setup were:

As pointed out, I only migrated the Docker-based self-hosted services (which run on the Babylon 5 Rocky Linux box) to AWS. Many self-hostable apps come with ready-to-use container images, making deploying them easy.

My other two OpenBSD VMs (blowfish.buetow.org, hosted at Hetzner, and fishfinger.buetow.org, hosted at OpenBSD Amsterdam) still run (and they will keep running) the following services:

It is all automated with Rex, aka Rexify. This OpenBSD setup is my "fun" or "for pleasure" setup, whereas I always considered the Rocky Linux 9 one the "practical means to an end" setup for having 3rd-party Docker containers up and running with as little work as possible.

(R)?ex, the friendly automation framework

KISS server monitoring with Gogios

Let's Encrypt with OpenBSD and Rex

With AWS, I decided to get myself a new domain name, as I could fully separate my AWS setup from my conventional setup and give Route 53 as an authoritative DNS a spin.

I decided to automate everything with Terraform, as I wanted to learn to use it as it appears standard now in the job market.

All services are deployed automatically to AWS ECS Fargate. ECS is AWS's Elastic Container Service, and Fargate automatically manages the underlying hardware infrastructure (e.g., how many CPUs, how much RAM, etc.) for me. So I don't have to worry about having enough EC2 instances to serve my demands, for example.

The authoritative DNS for the buetow.cloud domain is AWS Route 53. TLS certificates are free here at AWS and offloaded through the AWS Application Load Balancer. The LB acts as a proxy to the ECS container instances of the services. A few services I run in ECS Fargate also require the AWS Network Load Balancer.

All services require some persistent storage. For that, I use an encrypted EFS file system, automatically replicated across all AZs (availability zones) of my region of choice, eu-central-1.

In case of an AZ outage, I could re-deploy all the failed containers in another AZ, and all the data would still be there.

The EFS automatically gets backed up by AWS for me following their standard Backup schedule. The daily backups are kept for 30 days.

Domain registration, TLS certificate configuration and configuration of the EFS backup were quickly done through the AWS web interface. These were only one-off tasks, so they weren't fully automated through Terraform.

You can find all Terraform manifests here:

https://codeberg.org/snonux/terraform

Whereas:

And here, finally, is the list of all the container apps my Terraform manifests deploy. The FQDNs here may not be reachable. I spin them up only on demand (for cost reasons). All services are fully dual-stacked (IPv4 & IPv6).

Miniflux is a minimalist and opinionated feed reader. With the move to AWS, I also retired my bloated instance of NextCloud. So, with Miniflux, I retired NextCloud News.

Miniflux requires two ECS containers. One is the Miniflux app, and the other is the PostgreSQL DB.

https://miniflux.app/

Audiobookshelf was the first Docker app I installed. It is a self-hosted audiobook and podcast server. It comes with a neat web interface, and there is also an Android app available, which also works in offline mode. This is great, as for cost savings I only have the ECS instance running some of the time.

With Audiobookshelf, I replaced my former Audible subscription and my separate podcast app. For podcast synchronisation, I used to use the Gpodder NextCloud sync app, but I have now retired that one in favour of Audiobookshelf as well :-)

https://www.audiobookshelf.org

Syncthing is a continuous file synchronisation program. In real-time, it synchronises files between two or more computers, safely protected from prying eyes. Your data is your own, and you deserve to choose where it is stored, whether it is shared with some third party, and how it's transmitted over the internet.

With Syncthing, I retired my old NextCloud Files setup and the file sync client on all my devices. I also quit my NextCloud Notes setup. All my notes are now plain Markdown files in a Notes directory. On Android, I can edit them with any text or Markdown editor (e.g. Obsidian), and they are synchronised via Syncthing to my other computers, in both directions.

I use Syncthing to synchronise some of my phone's data (e.g. notes, pictures and other documents). Initially, I synced all of my pictures, videos, etc. with AWS, but that was pretty expensive. So for now, I use the AWS instance only whilst travelling; otherwise, I use my Syncthing instance here on my LAN (I also have a cheap cloud backup in AWS S3 Glacier Deep Archive, but that's for another blog post).

https://syncthing.net/

Radicale is an excellent minimalist WebDAV calendar and contact synchronisation server. It was good enough to replace my NextCloud Calendar and NextCloud Contacts setup. Unfortunately, there wasn't a ready-to-use Docker image. So, I created my own.

On Android, it works great together with the DAVx5 client for synchronisation.

https://radicale.org/

https://codeberg.org/snonux/docker-radicale-server

https://www.davx5.com/

Wallabag is a self-hostable "save now - read later" service, and it also comes with an Android app which also has an offline mode. Think of Getpocket, but open-source!

https://wallabag.org/

https://github.com/wallabag/wallabag

Anki is a great (the greatest) flash-card learning program. I am currently learning Bulgarian as my 3rd language. There is also an Android app that has an offline mode, and advanced users can also self-host the server anki-sync-server. For some reason (not going into the details here), I had to build my own Docker image for the server.

https://apps.ankiweb.net/

https://codeberg.org/snonux/docker-anki-sync-server

Vaultwarden is an alternative implementation of the Bitwarden server API written in Rust and compatible with upstream Bitwarden clients, perfect for self-hosted deployment where running the official resource-heavy service might not be ideal. So, this is a great password manager server which can be used with any Bitwarden Android app.

I currently don't use it, but I may in the future. I made it available in my ECS Fargate setup anyway for now.

https://github.com/dani-garcia/vaultwarden

I currently use geheim, a Ruby command-line tool I wrote, as my password manager. You can read a little bit about it here under "More":

Sweating the small stuff

This is a tiny ARM-based Amazon Linux EC2 instance, which I sometimes spin up for investigation or manual work on my EFS file system in AWS.

I have learned a lot about AWS and Terraform during this migration. This was actually my first AWS hands-on project with practical use.

All of this was not particularly difficult (though at times a bit confusing). I see the use of Terraform for managing more extensive infrastructures (it was even helpful for my small setup here). At least I now know what all the buzz is about :-). I don't think Terraform's HCL is a nice language, though. It gets its job done, but it could be more elegant, IMHO.

Deploying updates to AWS is much easier, and some of the manual maintenance burdens of my Rocky Linux 9 VM are gone. So I will have more time for other projects!

Will I keep it in the cloud? I don't know yet. But maybe I won't renew the buetow.cloud domain and instead will use .cloud.buetow.org or .aws.buetow.org subdomains.

Will the AWS setup be cheaper than my old Rocky Linux setup? It might be, as I only turn ECS and the load balancers on and off on demand. Time will tell! The first forecasts suggest that the costs will be around the same.

E-Mail your comments to paul@nospam.buetow.org :-)

Back to the main site

        </div>

    </content>

</entry>

<entry>

    <title>One reason why I love OpenBSD</title>

    <link href="gemini://foo.zone/gemfeed/2024-01-13-one-reason-why-i-love-openbsd.gmi" />

    <id>gemini://foo.zone/gemfeed/2024-01-13-one-reason-why-i-love-openbsd.gmi</id>

    <updated>2024-01-13T22:55:33+02:00</updated>

    <author>

        <name>Paul Buetow aka snonux</name>

        <email>paul@dev.buetow.org</email>

    </author>

    <summary>HKISSFISHKISSFISHKISSFISHKISSFISH    KISS</summary>

    <content type="xhtml">

        <div xmlns="http://www.w3.org/1999/xhtml">

            <h1 style='display: inline' id='one-reason-why-i-love-openbsd'>One reason why I love OpenBSD</h1><br />

Published at 2024-01-13T22:55:33+02:00

       FISHKISSFISHKIS               

   SFISHKISSFISHKISSFISH            F

ISHK   ISSFISHKISSFISHKISS         FI

SHKISS FISHKISSFISHKISSFISS FIS

HKISSFISHKISSFISHKISSFISHKISSFISH KISS

FISHKISSFISHKISSFISHKISSFISHKISS FISHK

  SSFISHKISSFISHKISSFISHKISSFISHKISSF

ISHKISSFISHKISSFISHKISSFISHKISSF ISHKI

SSFISHKISSFISHKISSFISHKISSFISHKIS SFIS

HKISSFISHKISSFISHKISSFISHKISS FIS

HKISSFISHKISSFISHKISSFISHK         IS

   SFISHKISSFISHKISSFISH            K

     ISSFISHKISSFISHK               

I just upgraded my OpenBSD machines from 7.3 to 7.4 by following the unattended upgrade guide:

https://www.openbsd.org/faq/upgrade74.html


$ doas sysupgrade # Update all binaries (including Kernel)

sysupgrade downloaded the next release, upgraded to it, and rebooted the system. After the reboot, I ran:


$ doas pkg_add -u # Update all packages

$ doas reboot # Just in case, reboot one more time

That's it! It took around 5 minutes in total! No issues, only these few commands, only 5 minutes! It just works! No problems, no conflicts, and no heaps of config file merge conflicts (actually, none at all).

I followed the same procedure the previous times and never encountered any difficulties with any OpenBSD upgrades.

I have seen upgrades of other operating systems either take a long time or break the system (requiring manual steps to repair). That's just one of many reasons why I love OpenBSD! There never seem to be any problems. It just gets its job done!

The OpenBSD Project

BTW: Are you looking for an opinionated OpenBSD VM hoster? OpenBSD Amsterdam may be for you. They rock (I have a VM there, too)!

https://openbsd.amsterdam

E-Mail your comments to paul@nospam.buetow.org :-)

Other *BSD related posts are:

2016-04-09 Jails and ZFS with Puppet on FreeBSD

2022-07-30 Let's Encrypt with OpenBSD and Rex

2022-10-30 Installing DTail on OpenBSD

2024-01-13 One reason why I love OpenBSD (You are currently reading this)

2024-04-01 KISS high-availability with OpenBSD

2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage

2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation

Back to the main site

        </div>

    </content>

</entry>

<entry>

    <title>Site Reliability Engineering - Part 3: On-Call Culture</title>

    <link href="gemini://foo.zone/gemfeed/2024-01-09-site-reliability-engineering-part-3.gmi" />

    <id>gemini://foo.zone/gemfeed/2024-01-09-site-reliability-engineering-part-3.gmi</id>

    <updated>2024-01-09T18:35:48+02:00</updated>

    <author>

        <name>Paul Buetow aka snonux</name>

        <email>paul@dev.buetow.org</email>

    </author>

    <summary>Welcome to Part 3 of my Site Reliability Engineering (SRE) series. I'm currently working as a Site Reliability Engineer, and I’m here to share what SRE is all about in this blog series.</summary>

    <content type="xhtml">

        <div xmlns="http://www.w3.org/1999/xhtml">

            <h1 style='display: inline' id='site-reliability-engineering---part-3-on-call-culture'>Site Reliability Engineering - Part 3: On-Call Culture</h1><br />

Published at 2024-01-09T18:35:48+02:00

Welcome to Part 3 of my Site Reliability Engineering (SRE) series. I'm currently working as a Site Reliability Engineer, and I’m here to share what SRE is all about in this blog series.

2023-08-18 Site Reliability Engineering - Part 1: SRE and Organizational Culture

2023-11-19 Site Reliability Engineering - Part 2: Operational Balance

2024-01-09 Site Reliability Engineering - Part 3: On-Call Culture (You are currently reading this)

2024-09-07 Site Reliability Engineering - Part 4: Onboarding for On-Call Engineers

                ..--""""----..                 

             .-"   ..--""""--.j-.              

          .-"   .-"        .--.""--..          

       .-"   .-"       ..--"-. \/    ;         

    .-"   .-"_.--..--""  ..--&#39;  "-.  :         

  .&#39;    .&#39;  /  `. \..--"" __ _     \ ;         

 :.__.-"    \  /        .&#39; ( )"-.   Y          

 ;           ;:        ( )     ( ).  \         

.': /:: : \ \

.'.-"._ _.-" ; ; ( ) .-. ( ) \

" `.""" .j" : : \ ; ; \

bug /"""""/     ;      ( )    "" :.( )   \     

   /\    /      :       \         \`.:  _ \    

  :  `. /        ;       `( )     (\/ :" \ \   

   \   `.        :         "-.(_)_.&#39;   t-&#39;  ;  

    \    `.       ;                    ..--":  

     `.    `.     :              ..--""     :  

       `.    "-.   ;       ..--""           ;  

         `.     "-.:_..--""            ..--"   

           `.      :             ..--""        

             "-.   :       ..--""              

                "-.;_..--""                    

Site Reliability Engineering is all about keeping systems reliable, but we often forget how important the human side is. A healthy on-call culture is just as crucial as any technical fix. The well-being of the engineers really matters.

First off, a healthy on-call rotation is about more than just handling incidents. It's about creating a supportive ecosystem. This means cutting down on pain points, offering mentorship, quickly iterating on processes, and making sure engineers have the right tools. But there's a catch—engineers need to be willing to learn. Especially in on-call rotations where SREs work with Software Engineers or QA Engineers, it can be tough to get everyone motivated. QA Engineers want to test, Software Engineers want to build new features; they don’t want to deal with production issues. This can be really frustrating for the SREs trying to mentor them.

Plus, measuring a good on-call experience isn't always clear-cut. You might think fewer pages mean a better on-call setup—and yeah, no one wants to get paged after hours—but it's not just about the number of pages. Trust, ownership, accountability, and solid communication are what really matter.

A key part is giving feedback about the on-call experience to keep learning and improving. If alerts are mostly noise, they need to be tweaked or even ditched. If alerts are helpful, can we automate the repetitive tasks? If there are knowledge gaps, is the documentation lacking? Regular retrospectives ensure that the systems get better over time and the on-call experience improves for the engineers.

Getting new team members ready for on-call duties is super important for keeping systems reliable and efficient. This means giving them the knowledge, tools, and support they need to handle incidents with confidence. It starts with a rundown of the system architecture and common issues, then training on monitoring tools, alerting systems, and incident response protocols. Watching experienced on-call engineers in action can provide some hands-on learning. Too often, though, new engineers get thrown into the deep end without proper onboarding because the more experienced engineers are too busy dealing with ongoing production issues.

A culture where everyone's always on and alert can cause burnout. Engineers need to know their limits, take breaks, and ask for help when they need it. This isn't just about personal health; a burnt-out engineer can drag down the whole team and the systems they manage. A good on-call culture keeps systems running while making sure engineers are happy, healthy, and supported. Experienced engineers should take the time to mentor juniors, but junior engineers should also stay engaged, investigate issues, and learn new things on their own.

For junior engineers, it's tempting to always ask the experts for help whenever something goes wrong. While that might seem reasonable, constantly handing out solutions doesn't scale—there are endless ways for production systems to break. So, every engineer needs to learn how to debug, troubleshoot, and resolve incidents on their own. The experts should be there for guidance and can step in when a junior gets really stuck, but they also need to give space for less experienced engineers to grow and learn.

A blameless on-call culture is essential for creating a safe and collaborative environment where engineers can handle incidents without worrying about getting blamed. It recognizes that mistakes are just part of learning and innovating. When people know they won’t be punished for errors, they’re more likely to talk openly about what went wrong, which helps the whole team learn and improve. Plus, a blameless culture boosts psychological safety, job satisfaction, and reduces burnout, keeping everyone committed and engaged.

Mistakes are gonna happen, which is why having a blameless on-call culture is so important.

Continue with the fourth part of this series:

2024-09-07 Site Reliability Engineering - Part 4: Onboarding for On-Call Engineers

E-Mail your comments to paul@nospam.buetow.org :-)

Back to the main site

        </div>

    </content>

</entry>

<entry>

    <title>Bash Golf Part 3</title>

    <link href="gemini://foo.zone/gemfeed/2023-12-10-bash-golf-part-3.gmi" />

    <id>gemini://foo.zone/gemfeed/2023-12-10-bash-golf-part-3.gmi</id>

    <updated>2023-12-10T11:35:54+02:00</updated>

    <author>

        <name>Paul Buetow aka snonux</name>

        <email>paul@dev.buetow.org</email>

    </author>

    <summary>This is the third blog post about my Bash Golf series. This series is random Bash tips, tricks, and weirdnesses I have encountered over time. </summary>

    <content type="xhtml">

        <div xmlns="http://www.w3.org/1999/xhtml">

            <h1 style='display: inline' id='bash-golf-part-3'>Bash Golf Part 3</h1><br />

Published at 2023-12-10T11:35:54+02:00

This is the third blog post about my Bash Golf series. This series is random Bash tips, tricks, and weirdnesses I have encountered over time.

2021-11-29 Bash Golf Part 1

2022-01-01 Bash Golf Part 2

2023-12-10 Bash Golf Part 3 (You are currently reading this)

&#39;\       &#39;\        &#39;\                   .  .          |&gt;18&gt;&gt;

  \        \         \              .         &#39; .     |

 O&gt;&gt;      O&gt;&gt;       O&gt;&gt;         .                 &#39;o  |

  \       .\. ..    .\. ..   .                        |

  /\    .  /\     .  /\    . .                        |

 / /   .  / /  .&#39;.  / /  .&#39;    .                      |

jgs^^^^^^^`^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

                    Art by Joan Stark, mod. by Paul Buetow

FUNCNAME is an array. If you are looking for a way to dynamically determine the name of the current function (which could be considered the callee in the context of its own execution), you can use the special variable FUNCNAME. This array variable contains the names of all shell functions currently on the execution call stack. The element FUNCNAME[0] holds the name of the currently executing function, FUNCNAME[1] the name of the function that called it, and so on.

This is particularly useful for logging when you want to include the callee function in the log output. E.g. look at this log helper:


log () {

<b><u><font color="#000000">local</font></u></b> -r level=<font color="#808080">"$1"</font>; <b><u><font color="#000000">shift</font></u></b>

<b><u><font color="#000000">local</font></u></b> -r message=<font color="#808080">"$1"</font>; <b><u><font color="#000000">shift</font></u></b>

<b><u><font color="#000000">local</font></u></b> -i pid=<font color="#808080">"$$"</font>

<b><u><font color="#000000">local</font></u></b> -r callee=${FUNCNAME[1]}

<b><u><font color="#000000">local</font></u></b> -r stamp=$(date +%Y%m%d-%H%M%S)

echo <font color="#808080">"$level|$stamp|$pid|$callee|$message"</font> &gt;&amp;<font color="#000000">2</font>

}

at_home_friday_evening () {

log INFO <font color="#808080">'One Peperoni Pizza, please'</font>

}

at_home_friday_evening

The output is as follows:


INFO|20231210-082732|123002|at_home_friday_evening|One Peperoni Pizza, please
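FUNCNAME holds the entire call stack, not just the immediate callee. As a small sketch (not from the original post, and the helper name print_call_stack is made up), you can dump every frame:

```shell
#!/usr/bin/env bash

# Walk the whole FUNCNAME stack; index 0 is this function itself,
# higher indices are the callers, all the way up to the script level.
print_call_stack() {
    local -i i
    for ((i = 0; i < ${#FUNCNAME[@]}; i++)); do
        echo "level $i: ${FUNCNAME[i]}"
    done
}

middle() { print_call_stack; }
outer()  { middle; }

outer
```

The first three lines printed here are level 0: print_call_stack, level 1: middle and level 2: outer, followed by the script-level frame.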

This one may be widely known already, but I am including it here as I found a cute image illustrating it. Let's break :(){ :|:& };: down: :() declares a function named :, its body :|: calls that function and pipes the output into a second invocation of itself, the & pushes the whole thing into the background, and the final : (after the closing };) invokes the function for the first time.

So, it's a fork bomb. If you run it, your computer will run out of resources eventually. (Modern Linux distributions could have reasonable limits configured for your login session, so it won't bring down your whole system anymore unless you run it as root!)
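Renaming : to something readable makes the recursive structure easier to see. The sketch below only defines the function and deliberately never calls it:

```shell
# A readable equivalent of :(){ :|:& };: - do NOT invoke this!
bomb() {
    bomb | bomb &   # call yourself twice, piped together, in the background
}
# The trailing ':' in the original one-liner is the first (and fatal)
# invocation; it is intentionally omitted here.
```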

And here is the cute illustration:

Bash defines variables as it is interpreting the code. The same applies to function declarations. Let's consider this code:


outer() {

inner() {

echo <font color="#808080">'Intel inside!'</font>

}

inner

}

inner

outer

inner

And let's execute it:

❯ ./inner.sh

/tmp/inner.sh: line 10: inner: command not found

Intel inside!

Intel inside!

What happened? The first time inner was called, it wasn't defined yet. It only gets defined once outer has run. Note that inner will then still be globally defined. Also, functions can be declared multiple times (the last version wins):


outer1() {

inner() {

echo <font color="#808080">'Intel inside!'</font>

}

inner

}

outer2() {

inner() {

echo <font color="#808080">'Wintel inside!'</font>

}

inner

}

outer1

inner

outer2

inner

And let's run it:

❯ ./inner2.sh

Intel inside!

Intel inside!

Wintel inside!

Wintel inside!

Have you ever wondered how to execute a shell function in parallel through xargs? The problem is that this won't work:


some_expensive_operations() {

echo "Doing expensive operations with '$1' from pid $$"

}

for i in {0..9}; do echo $i; done \

| xargs -P10 -I{} bash -c 'some_expensive_operations "{}"'

We try here to run ten parallel processes; each of them should run the some_expensive_operations function with a different argument. The arguments are provided to xargs through STDIN one per line. When executed, we get this:

❯ ./xargs.sh

bash: line 1: some_expensive_operations: command not found

bash: line 1: some_expensive_operations: command not found

bash: line 1: some_expensive_operations: command not found

bash: line 1: some_expensive_operations: command not found

bash: line 1: some_expensive_operations: command not found

bash: line 1: some_expensive_operations: command not found

bash: line 1: some_expensive_operations: command not found

bash: line 1: some_expensive_operations: command not found

bash: line 1: some_expensive_operations: command not found

bash: line 1: some_expensive_operations: command not found

There's an easy solution for this: just export the function! It will then magically be available in any sub-shell!


some_expensive_operations() {

echo "Doing expensive operations with '$1' from pid $$"

}

export -f some_expensive_operations

for i in {0..9}; do echo $i; done \

| xargs -P10 -I{} bash -c 'some_expensive_operations "{}"'

When we run this now, we get:

❯ ./xargs.sh

Doing expensive operations with '0' from pid 132831

Doing expensive operations with '1' from pid 132832

Doing expensive operations with '2' from pid 132833

Doing expensive operations with '3' from pid 132834

Doing expensive operations with '4' from pid 132835

Doing expensive operations with '5' from pid 132836

Doing expensive operations with '6' from pid 132837

Doing expensive operations with '7' from pid 132838

Doing expensive operations with '8' from pid 132839

Doing expensive operations with '9' from pid 132840

If some_expensive_operations called another function, that other function would also have to be exported. Otherwise, there will be a runtime error again. E.g., this won't work:


some_other_function() {

echo "$1"

}

some_expensive_operations() {

some_other_function "Doing expensive operations with '$1' from pid $$"

}

export -f some_expensive_operations

for i in {0..9}; do echo $i; done \

| xargs -P10 -I{} bash -c 'some_expensive_operations "{}"'

... because some_other_function isn't exported! You will also need to add an export -f some_other_function!
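For completeness, here is a fixed version of the sketch above that exports both functions:

```shell
#!/usr/bin/env bash

some_other_function() {
    echo "$1"
}

some_expensive_operations() {
    some_other_function "Doing expensive operations with '$1' from pid $$"
}

# Export BOTH functions, so each bash sub-shell spawned by xargs sees them.
export -f some_other_function
export -f some_expensive_operations

for i in {0..9}; do echo $i; done \
    | xargs -P10 -I{} bash -c 'some_expensive_operations "{}"'
```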

You may know that local is how to declare local variables in a function. What most don't know is that those variables actually have dynamic scope. Consider the following example:


foo() {

local foo=bar # Declare local/dynamic variable

bar

echo "$foo"

}

bar() {

echo "$foo"

foo=baz

}

foo=foo # Declare global variable

foo # Call function foo

echo "$foo"

Let's pause a minute. What do you think the output would be?

Let's run it:

❯ ./dynamic.sh

bar

baz

foo

What happened? The variable foo (declared with local) is available in the function it was declared in and in all functions further down the call stack! We can even modify the value of foo there, and the change will be visible up the call stack. It is not a global variable, though; on the last line, echo "$foo" prints the global variable's content.
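If you want to shield callers from such modifications, one option (a sketch, not from the original post) is to declare the function body with parentheses instead of braces, so it runs in a subshell:

```shell
#!/usr/bin/env bash

foo() {
    local foo=bar
    bar
    echo "$foo"   # still prints "bar": bar's assignment didn't leak
}

# Parentheses instead of braces: the whole function body runs in a
# subshell, so assignments inside it cannot escape up the call stack.
bar() (
    echo "$foo"   # dynamic scoping still works for *reading*
    foo=baz       # ...but this modification dies with the subshell
)

foo
```

Running this prints bar twice: once from inside bar (reading the dynamically scoped variable) and once from foo (unaffected by the subshell's assignment).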

Consider all of the following variants as more or less equivalent:


declare -r foo=foo

declare -r bar=bar

if [ "$foo" = foo ]; then

if [ "$bar" = bar ]; then

echo ok1

fi

fi

if [ "$foo" = foo ] && [ "$bar" == bar ]; then

echo ok2a

fi

[ "$foo" = foo ] && [ "$bar" == bar ] && echo ok2b

if [[ "$foo" = foo && "$bar" == bar ]]; then

echo ok3a

fi

[[ "$foo" = foo && "$bar" == bar ]] && echo ok3b

if test "$foo" = foo && test "$bar" = bar; then

echo ok4a

fi

test "$foo" = foo && test "$bar" = bar && echo ok4b

The output we get is:

❯ ./if.sh

ok1

ok2a

ok2b

ok3a

ok3b

ok4a

ok4b

You all know how to comment: put a # in front of the line. For multi-line comments, you can either chain multiple single-line comments or abuse a heredoc redirected to the : no-op command.


# Single line comment

# These are two single line

# comments one after another

: <<COMMENT

This is another way a

multi line comment

could be written!

COMMENT

I will not demonstrate the execution of this script, as it doesn't print anything! It's obviously not the prettiest way of commenting your code, but it can sometimes come in handy!

Consider this script:


echo foo

echo echo baz >> $0

echo bar

When it is run, it will do:

❯ ./if.sh

foo

bar

baz

❯ cat if.sh

#!/usr/bin/env bash

echo foo

echo echo baz >> $0

echo bar

echo baz

So what happened? The echo baz line was appended to the script while it was still executing, and the interpreter picked it up! This tells us that Bash evaluates each line as it encounters it, which can lead to nasty side effects when editing a script while it is still running. You should always keep this in mind!
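One defensive pattern (my sketch, not from the original post) is to keep all logic in functions and end the script with an explicit exit; since Bash never reads past that line, later appends to the file are harmless:

```shell
#!/usr/bin/env bash

# Write a defensively structured script to a temp file and run it.
script=$(mktemp)
cat > "$script" <<'EOF'
main() {
    echo foo
    echo bar
}
# Explicit exit on the last line: the running interpreter never reads
# past this point, even if more lines get appended to the file.
main "$@"; exit
EOF

out=$(bash "$script")
echo "$out"
rm -f "$script"
```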

E-Mail your comments to paul@nospam.buetow.org :-)

Other related posts are:

2021-05-16 Personal Bash coding style guide

2021-06-05 Gemtexter - One Bash script to rule it all

2021-11-29 Bash Golf Part 1

2022-01-01 Bash Golf Part 2

2023-12-10 Bash Golf Part 3 (You are currently reading this)

Back to the main site

        </div>

    </content>

</entry>

<entry>

    <title>Site Reliability Engineering - Part 2: Operational Balance</title>

    <link href="gemini://foo.zone/gemfeed/2023-11-19-site-reliability-engineering-part-2.gmi" />

    <id>gemini://foo.zone/gemfeed/2023-11-19-site-reliability-engineering-part-2.gmi</id>

    <updated>2023-11-19T00:18:18+03:00</updated>

    <author>

        <name>Paul Buetow aka snonux</name>

        <email>paul@dev.buetow.org</email>

    </author>

    <summary>This is the second part of my Site Reliability Engineering (SRE) series. I am currently employed as a Site Reliability Engineer and will try to share what SRE is about in this blog series.</summary>

    <content type="xhtml">

        <div xmlns="http://www.w3.org/1999/xhtml">

            <h1 style='display: inline' id='site-reliability-engineering---part-2-operational-balance'>Site Reliability Engineering - Part 2: Operational Balance</h1><br />

Published at 2023-11-19T00:18:18+03:00

This is the second part of my Site Reliability Engineering (SRE) series. I am currently employed as a Site Reliability Engineer and will try to share what SRE is about in this blog series.

2023-08-18 Site Reliability Engineering - Part 1: SRE and Organizational Culture

2023-11-19 Site Reliability Engineering - Part 2: Operational Balance (You are currently reading this)

2024-01-09 Site Reliability Engineering - Part 3: On-Call Culture

2024-09-07 Site Reliability Engineering - Part 4: Onboarding for On-Call Engineers

⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⣠⣾⣷⣄⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀

⠀⠀⠀⠀⣾⠿⠿⠿⠶⠾⠿⠿⣿⣿⣿⣿⣿⣿⠿⠿⠶⠶⠿⠿⠿⣷⠀⠀⠀⠀

⠀⠀⠀⣸⢿⣆⠀⠀⠀⠀⠀⠀⠀⠙⢿⡿⠉⠀⠀⠀⠀⠀⠀⠀⣸⣿⡆⠀⠀⠀

⠀⠀⢠⡟⠀⢻⣆⠀⠀⠀⠀⠀⠀⠀⣾⣧⠀⠀⠀⠀⠀⠀⠀⣰⡟⠀⢻⡄⠀⠀

⠀⢀⣾⠃⠀⠀⢿⡄⠀⠀⠀⠀⠀⢠⣿⣿⡀⠀⠀⠀⠀⠀⢠⡿⠀⠀⠘⣷⡀⠀

⠀⣼⣏⣀⣀⣀⣈⣿⡀⠀⠀⠀⠀⣸⣿⣿⡇⠀⠀⠀⠀⢀⣿⣃⣀⣀⣀⣸⣧⠀

⠀⢻⣿⣿⣿⣿⣿⣿⠃⠀⠀⠀⠀⣿⣿⣿⣿⠀⠀⠀⠀⠈⢿⣿⣿⣿⣿⣿⡿⠀

⠀⠀⠉⠛⠛⠛⠋⠁⠀⠀⠀⠀⢸⣿⣿⣿⣿⡆⠀⠀⠀⠀⠈⠙⠛⠛⠛⠉⠀⠀

⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠸⣿⣿⣿⣿⠇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀

⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣠⣾⣿⣿⣷⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀

⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣸⣿⣿⣿⣿⣿⣿⣆⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀

⠀⠀⠀⠀⠀⠀⠴⠶⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠶⠦⠀⠀

Site Reliability Engineering is more than just a bunch of best practices or methods. It's a guiding light for engineering teams, helping them navigate the tricky waters of modern software development and system management.

In the world of software production, there are two big forces that often clash: the push for fast feature releases (velocity) and the need for reliable systems. Traditionally, moving faster meant more risk. SRE helps balance these opposing goals with things like error budgets and SLIs/SLOs. These tools give teams a clear way to measure how much they can push changes without hurting system health. So, the error budget becomes a balancing act, helping teams trade off between innovation and reliability.

Finding the right balance in SRE means juggling operations and coding. Ideally, engineers should split their time 50/50 between these tasks. This isn't just a random rule; it highlights how much SRE values both maintaining smooth operations and driving innovation. This way, SREs not only handle today's problems but also prepare for tomorrow's challenges.

But not all operations tasks are the same. SRE makes a clear distinction between "ops work" and "toil." Ops work is essential for maintaining systems and adds value, while toil is the repetitive, boring stuff that doesn’t. It's super important to recognize and minimize toil because a culture that lets engineers get bogged down in it will kill innovation and growth. The way an organization handles toil says a lot about its operational health and commitment to balance.

A key part of finding operational balance is the tools and processes that SREs use. Great monitoring and observability tools, especially those that can handle lots of complex data, are essential. This isn’t just about having the right tech—it shows that the organization values proactive problem-solving. With systems that can spot potential issues early, SREs can keep things stable while still pushing forward.

Operational balance isn't just about tech or processes; it's also about people. The well-being of on-call engineers is just as important as the health of the services they manage. Doing postmortems after incidents, having continuous feedback loops, and identifying gaps in tools, skills, or resources all help make sure the human side of operations gets the attention it deserves.

In the end, finding operational balance in SRE is an ongoing journey, not a one-time thing. Companies need to keep reassessing their practices, tools, and especially their culture. When they get this balance right, they can keep innovating without sacrificing the reliability of their systems, leading to long-term success.

That all sounds pretty idealistic. The reality is that getting the perfect balance is really tough. No system is ever going to be perfect. But hey, we should still strive for it!

Continue with the third part of this series:

2024-01-09 Site Reliability Engineering - Part 3: On-Call Culture

E-Mail your comments to paul@nospam.buetow.org :-)

Back to the main site

        </div>

    </content>

</entry>

<entry>

    <title>'Mind Management' book notes</title>

    <link href="gemini://foo.zone/gemfeed/2023-11-11-mind-management-book-notes.gmi" />

    <id>gemini://foo.zone/gemfeed/2023-11-11-mind-management-book-notes.gmi</id>

    <updated>2023-11-11T22:21:47+02:00</updated>

    <author>

        <name>Paul Buetow aka snonux</name>

        <email>paul@dev.buetow.org</email>

    </author>

    <summary>These are my personal takeaways after reading 'Mind Management' by David Kadavy. Note that the book contains much more knowledge wisdom and that these notes only contain points I personally found worth writing down. This is mainly for my own use, but you might find it helpful too.</summary>

    <content type="xhtml">

        <div xmlns="http://www.w3.org/1999/xhtml">

            <h1 style='display: inline' id='mind-management-book-notes'>"Mind Management" book notes</h1><br />

Published at 2023-11-11T22:21:47+02:00

These are my personal takeaways after reading "Mind Management" by David Kadavy. Note that the book contains much more knowledge wisdom and that these notes only contain points I personally found worth writing down. This is mainly for my own use, but you might find it helpful too.

     ,..........   ..........,

 ,..,&#39;          &#39;.&#39;          &#39;,..,

,&#39; ,&#39;            :            &#39;, &#39;,

,' ,' : ', ',

,' ,' : ', ',

,' ,'............., : ,.............', ',

,' '............ '.' ............' ',

'''''''''''''''''';''';''''''''''''''''''

                &#39;&#39;&#39;

Productivity isn't about time management - it's about mind management. When you put a lot of effort into something, there are:

If we do more things in less time, use every available slot, speed read, etc., we seem more productive. But in reality, that's not the entire truth: you also trade that one thing against everything else and cut out too much from your actual life.

...keep it.

Ask yourself: what is my mood now? We don't always have the energy to do everything, so the better strategy is to follow your current mood and energy. E.g.:

The morning without coffee is a gift for creativity, but you often get distracted. Minimize distractions, too. I have no window to stare out of, just a plain blank wall.

We need to try many different combinations. Limiting ourselves and trying too hard makes us frustrated and burn out. Creativity requires many iterations.

I can only work according to my available brain power.

I can also change my mood according to what needs improvement. Just imagine the last time you were in that mood and then try to get into it. It can take several tries to hit a working mood. Try to replicate that mental state. This can also be by location or by another habit, e.g. by a beer.

Once you are in a mental state, don't try to change it. It will take a while for your brain to switch to a completely different state.

Week of want. For a week, only do what you want and not what you must do. Your ideas will get much more expansive.

Doing what you want gives you pleasure and puts you in a good mood, which in turn increases creativity.

Minds work better in sprints and not in marathons. Have a weekly plan, not a daily one.

Organize by mental state. In the time management context, the mental state doesn't exist. You schedule as many things as possible by project. In the mind management context, mental state is everything. You could prepare by mental state and not by assignment.

You could schedule exploratory tasks when you are under grief. Sound systems should create slack for creativity. Plan only for a few minutes.

E-Mail your comments to paul@nospam.buetow.org :-)

Other book notes of mine are:

2023-03-16 "The Pragmatic Programmer" book notes

2023-04-01 "Never split the difference" book notes

2023-05-06 "The Obstacle is the Way" book notes

2023-07-17 "Software Developers Career Guide and Soft Skills" book notes

2023-11-11 "Mind Management" book notes (You are currently reading this)

2024-05-01 "Slow Productivity" book notes

2024-07-07 "The Stoic Challenge" book notes

2024-10-24 "Staff Engineer" book notes

Back to the main site

        </div>

    </content>

</entry>

<entry>

    <title>KISS static web photo albums with `photoalbum.sh`</title>

    <link href="gemini://foo.zone/gemfeed/2023-10-29-kiss-static-web-photo-albums-with-photoalbum.sh.gmi" />

    <id>gemini://foo.zone/gemfeed/2023-10-29-kiss-static-web-photo-albums-with-photoalbum.sh.gmi</id>

    <updated>2023-10-29T22:25:04+02:00</updated>

    <author>

        <name>Paul Buetow aka snonux</name>

        <email>paul@dev.buetow.org</email>

    </author>

    <summary>Once in a while, I share photos on the inter-web with either family and friends or on my The Irregular Ninja photo site. One hobby of mine is photography (even though I don't have enough time for it - so I am primarily a point-and-shoot photographer).</summary>

    <content type="xhtml">

        <div xmlns="http://www.w3.org/1999/xhtml">

            <h1 style='display: inline' id='kiss-static-web-photo-albums-with-photoalbumsh'>KISS static web photo albums with <span class='inlinecode'>photoalbum.sh</span></h1><br />

Published at 2023-10-29T22:25:04+02:00

Once in a while, I share photos on the inter-web with either family and friends or on my The Irregular Ninja photo site. One hobby of mine is photography (even though I don't have enough time for it - so I am primarily a point-and-shoot photographer).

I'm not particularly eager to use any photo social sharing platforms such as Flickr, 500px (I used them regularly in the past), etc., anymore. I value self-hosting, DIY and privacy (nobody should data mine my photos), and no third party should have any rights to my pictures.

I value KISS (keep it simple and stupid) and simplicity. All that's required for a web photo album is some simple HTML and spice it up with CSS. No need for JavaScript, no need for a complex dynamic website.

     ___        .---------.._

____!fsc!....-' .g8888888p. '-------.....

.' // .g8: :8p..---....___ &#39;.

| foo.zone // () d88: :88b|==========! !|

| // 888: :888|==========| !|

|___ \_______'T88888888888P''----------'//|

| \ """"""""""""""""""""""""""""""""""/ |

| !...._____ .="""=. .[] ____...! |

| / ! .g$p. ! .[] : |

| ! : $$$$$ : .[] : |

| !irregular.ninja ! 'T$P' ! .[] '.|

| __ "=._.=" .() __ |

|.--' '----._______________________.----' '--.|

'._____________________________________________.'

photoalbum.sh is a minimal Bash (Bourne Again Shell) script for Unix-like operating systems (such as Linux) to generate static web photo albums. The resulting static photo album is pure HTML+CSS (without any JavaScript!). It is specially designed to be as simple as possible.

Installation is straightforward. All that's required is a recent version of GNU Bash, GNU Make, Git and ImageMagick. On Fedora, the dependencies are installed with:

% sudo dnf install -y ImageMagick make git

Now, clone, make and install the script:

% git clone https://codeberg.org/snonux/photoalbum

Cloning into 'photoalbum'...

remote: Enumerating objects: 1624, done.

remote: Total 1624 (delta 0), reused 0 (delta 0), pack-reused 1624

Receiving objects: 100% (1624/1624), 193.36 KiB | 1.49 MiB/s, done.

Resolving deltas: 100% (1227/1227), done.

% cd photoalbum

/home/paul/photoalbum

% make

cut -d' ' -f2 changelog | head -n 1 | sed 's/(//;s/)//' > .version

test ! -d ./bin && mkdir ./bin || exit 0

sed "s/PHOTOALBUMVERSION/$(cat .version)/" src/photoalbum.sh > ./bin/photoalbum

chmod 0755 ./bin/photoalbum

% sudo make install

test ! -d /usr/bin && mkdir -p /usr/bin || exit 0

cp ./bin/* /usr/bin

test ! -d /usr/share/photoalbum/templates && mkdir -p /usr/share/photoalbum/templates || exit 0

cp -R ./share/templates /usr/share/photoalbum/

test ! -d /etc/default && mkdir -p /etc/default || exit 0

cp ./src/photoalbum.default.conf /etc/default/photoalbum

You should now have the photoalbum command in your $PATH. But don't use it just yet! First, it needs to be set up!

% photoalbum version

This is Photoalbum Version 0.5.1

Now, it's time to set up the Irregular Ninja static web photo album (or any other web photo album you may be setting up!). Create a directory (here: irregular.ninja for the Irregular Ninja photo site - or any other sub-directory reflecting your album's name), and inside of that directory, create an incoming directory. Copy all photos to be part of the album there.

% mkdir irregular.ninja

% cd irregular.ninja

% # cp -Rpv ~/Photos/your-photos ./incoming

In this example, I am skipping the cp ... part as I intend to use an alternative incoming directory, as you will see later in the configuration file.

The general usage of photoalbum is as follows:

photoalbum clean|generate|version [rcfile]

photoalbum makemake

Whereas generate builds the album, clean removes the generated files, version prints the version, and makemake creates a Makefile plus a photoalbumrc configuration file in the current directory.

So what we will do next is to run the following inside of the irregular.ninja/ directory; it will generate a Makefile and a configuration file photoalbumrc containing a few configurable options:


% photoalbum makemake

You may now customize ./photoalbumrc and run make

% cat Makefile

all:

photoalbum generate photoalbumrc

clean:

photoalbum clean photoalbumrc

% cat photoalbumrc

# The title of the photoalbum

TITLE='A simple Photoalbum'

# Thumbnail height geometry

THUMBHEIGHT=300

# Normal geometry height (when viewing photo). Uncomment, to keep original size.

HEIGHT=1200

# Max previews per page.

MAXPREVIEWS=40

# Randomly shuffle all previews.

# SHUFFLE=yes

# Diverse directories, need to be full paths, not relative!

INCOMING_DIR=$(pwd)/incoming

DIST_DIR=$(pwd)/dist

TEMPLATE_DIR=/usr/share/photoalbum/templates/default

#TEMPLATE_DIR=/usr/share/photoalbum/templates/minimal

# Includes a .tar of the incoming dir in the dist, can be yes or no

TARBALL_INCLUDE=yes

TARBALL_SUFFIX=.tar

TAR_OPTS='-c'

# Some debugging options

#set -e

#set -x

In the case of irregular.ninja, I changed the defaults to the following:


+++ photoalbumrc.new 2023-06-04 10:40:08.030994440 +0300

@@ -1,23 +1,24 @@

 # The title of the photoalbum

-TITLE='A simple Photoalbum'

+TITLE='Irregular.Ninja'

 # Thumbnail height geometry

-THUMBHEIGHT=300

+THUMBHEIGHT=400

 # Normal geometry height (when viewing photo). Uncomment, to keep original size.

-HEIGHT=1200

+HEIGHT=1800

 # Max previews per page.

 MAXPREVIEWS=40

-# Randomly shuffle all previews.

-# SHUFFLE=yes

+# Randomly shuffle

+SHUFFLE=yes

 # Diverse directories, need to be full paths, not relative!

-INCOMING_DIR=$(pwd)/incoming

+INCOMING_DIR=~/Nextcloud/Photos/irregular.ninja

 DIST_DIR=$(pwd)/dist

 TEMPLATE_DIR=/usr/share/photoalbum/templates/default

 #TEMPLATE_DIR=/usr/share/photoalbum/templates/minimal

 # Includes a .tar of the incoming dir in the dist, can be yes or no

-TARBALL_INCLUDE=yes

+TARBALL_INCLUDE=no

 TARBALL_SUFFIX=.tar

 TAR_OPTS='-c'

So I changed the album title, adjusted some image and thumbnail dimensions, and I want all images to be randomly shuffled every time the album is generated! I also keep all my photos in my Nextcloud photo directory and don't want to copy them to the local incoming directory. And I don't include a tarball of the whole album as a download.

Let's generate it. Depending on the image sizes and count, the following step may take a while.

% make

photoalbum generate photoalbumrc

Processing 1055079_cool-water-wallpapers-hd-hd-desktop-wal.jpg to /home/paul/irregular.ninja/dist/photos/1055079_cool-water-wallpapers-hd-hd-desktop-wal.jpg

Processing 11271242324.jpg to /home/paul/irregular.ninja/dist/photos/11271242324.jpg

Processing 11271306683.jpg to /home/paul/irregular.ninja/dist/photos/11271306683.jpg

Processing 13950707932.jpg to /home/paul/irregular.ninja/dist/photos/13950707932.jpg

Processing 14077406487.jpg to /home/paul/irregular.ninja/dist/photos/14077406487.jpg

Processing 14859380100.jpg to /home/paul/irregular.ninja/dist/photos/14859380100.jpg

Processing 14869239578.jpg to /home/paul/irregular.ninja/dist/photos/14869239578.jpg

Processing 14879132910.jpg to /home/paul/irregular.ninja/dist/photos/14879132910.jpg

.

.

.

Generating /home/paul/irregular.ninja/dist/html/7-4.html

Creating thumb /home/paul/irregular.ninja/dist/thumbs/20211130_091051.jpg

Creating blur /home/paul/irregular.ninja/dist/blurs/20211130_091051.jpg

Generating /home/paul/irregular.ninja/dist/html/page-7.html

Generating /home/paul/irregular.ninja/dist/html/7-5.html

Generating /home/paul/irregular.ninja/dist/html/7-5.html

Generating /home/paul/irregular.ninja/dist/html/7-5.html

Creating thumb /home/paul/irregular.ninja/dist/thumbs/DSCF0188.JPG

Creating blur /home/paul/irregular.ninja/dist/blurs/DSCF0188.JPG

Generating /home/paul/irregular.ninja/dist/html/page-7.html

Generating /home/paul/irregular.ninja/dist/html/7-6.html

Generating /home/paul/irregular.ninja/dist/html/7-6.html

Generating /home/paul/irregular.ninja/dist/html/7-6.html

Creating thumb /home/paul/irregular.ninja/dist/thumbs/P3500897-01.jpg

Creating blur /home/paul/irregular.ninja/dist/blurs/P3500897-01.jpg

.

.

.

Generating /home/paul/irregular.ninja/dist/html/8-0.html

Generating /home/paul/irregular.ninja/dist/html/8-41.html

Generating /home/paul/irregular.ninja/dist/html/9-0.html

Generating /home/paul/irregular.ninja/dist/html/9-41.html

Generating /home/paul/irregular.ninja/dist/html/index.html

Generating /home/paul/irregular.ninja/dist/.//index.html

The result will be in the distribution directory ./dist. This directory is publishable to the inter-web:

% ls ./dist

blurs html index.html photos thumbs

I usually do that via rsync to my web server (I use OpenBSD with the standard httpd web server, btw.), which is as simple as:

% rsync --delete -av ./dist/. admin@blowfish.buetow.org:/var/www/htdocs/irregular.ninja/

Have a look at the end result here:

https://irregular.ninja

PS: There's also a server-side synchronisation script mirroring the same content to another server for high availability reasons (out of scope for this blog post).

A simple make clean will clean up the ./dist directory and all other (if any) temp files created.

Poke around in this source directory. You will find a bunch of Bash-HTML template files. You could tweak them to your liking.

A decent-looking (in my opinion, at least) static photo album generator in less than 500 lines of Bash code (273 as of this writing, to be precise) and with minimal dependencies; what more do you want? How many LOCs would this be in Raku with the same functionality (could it be sub-100)?

Also, I like the CSS effects which I recently added. In particular, for the Irregular Ninja site, I randomly shuffled the CSS effects you see. The background blur images are the same but rotated 180 degrees and blurred out.

photoalbum.sh source code on Codeberg.

E-Mail your comments to paul@nospam.buetow.org :-)

Other Bash and KISS-related posts are:

2021-05-16 Personal Bash coding style guide

2021-06-05 Gemtexter - One Bash script to rule it all

2021-09-12 Keep it simple and stupid

2021-11-29 Bash Golf Part 1

2022-01-01 Bash Golf Part 2

2023-06-01 KISS server monitoring with Gogios

2023-10-29 KISS static web photo albums with photoalbum.sh (You are currently reading this)

2023-12-10 Bash Golf Part 3

2024-04-01 KISS high-availability with OpenBSD

Back to the main site

        </div>

    </content>

</entry>

<entry>

    <title>DTail usage examples</title>

    <link href="gemini://foo.zone/gemfeed/2023-09-25-dtail-usage-examples.gmi" />

    <id>gemini://foo.zone/gemfeed/2023-09-25-dtail-usage-examples.gmi</id>

    <updated>2023-09-25T14:57:42+03:00</updated>

    <author>

        <name>Paul Buetow aka snonux</name>

        <email>paul@dev.buetow.org</email>

    </author>

    <summary>Hey there. As I am pretty busy this month personally (I am now on Paternity Leave) and as I still want to post once monthly, the blog post of this month will only be some DTail usage examples. They're from the DTail documentation, but not all readers of my blog may be aware of those!</summary>

    <content type="xhtml">

        <div xmlns="http://www.w3.org/1999/xhtml">

            <h1 style='display: inline' id='dtail-usage-examples'>DTail usage examples</h1><br />

Published at 2023-09-25T14:57:42+03:00

Hey there. As I am pretty busy this month personally (I am now on Paternity Leave) and as I still want to post once monthly, the blog post of this month will only be some DTail usage examples. They're from the DTail documentation, but not all readers of my blog may be aware of those!

DTail is a distributed DevOps tool for tailing, grepping and catting logs and other text files on many remote machines at once, which I programmed in Go.

https://dtail.dev

[ASCII art: "Let's tail those logs!"]

DTail consists of a server and several client binaries. In this post, I am showcasing their use!

The following example demonstrates how to follow logs of several servers at once. The server list is provided as a flat text file. The example filters all records containing the string INFO. Any other Go compatible regular expression can also be used instead of INFO.
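The exact command from the original demo is not shown here; a rough sketch of such an invocation (flag spellings inferred from the hints elsewhere in this post, so treat the syntax as approximate) looks like:

```
# Sketch, not verbatim from the DTail docs: follow logs on all
# servers listed in serverlist.txt and filter for INFO records.
dtail --servers serverlist.txt \
      --files '/var/log/app/*.log' \
      --regex INFO
```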


Hint: You can also provide a comma-separated server list, e.g.: --servers server1.example.org,server2.example.org:PORT,...

Hint: You can also use the shorthand version (omitting the --files)


To run ad-hoc map-reduce aggregations on newly written log lines, you must add a query. The following example follows all remote log lines and prints the result to standard output every few seconds.

Hint: To run a map-reduce query across log lines written in the past, please use the dmap command instead.


--files <font color="#808080">'/var/log/dserver/*.log'</font> \

--query <font color="#808080">'from STATS select sum($goroutines),sum($cgocalls),</font>

last($time),max(lifetimeConnections)'

Beware: For map-reduce queries to work, you have to ensure that DTail supports your log format. Check out the documentation of the DTail query language and the DTail log formats on the DTail homepage for more information.

Hint: You can also use the shorthand version:


--files <font color="#808080">'/var/log/dserver/*.log'</font> \

<font color="#808080">'from STATS select sum($goroutines),sum($cgocalls),</font>

last($time),max(lifetimeConnections)'

Here is another example:


--files <font color="#808080">'/var/log/dserver/*.log'</font> \

--query <font color="#808080">'from STATS select $hostname,max($goroutines),max($cgocalls),$loadavg,</font>

lifetimeConnections group by $hostname order by max($cgocalls)'

You can also continuously append the results to a CSV file by adding outfile append filename.csv to the query:


--files <font color="#808080">'/var/log/dserver/*.log'</font> \

--query <font color="#808080">'from STATS select ... outfile append result.csv'</font>

The following example demonstrates how to cat files (display the full content of the files) on several servers at once.
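A rough sketch of such a dcat invocation (assumed syntax, based on the flags shown elsewhere in this post):

```
# Sketch: display the full content of /etc/passwd on two servers.
dcat --servers server1.example.org,server2.example.org \
     --files /etc/passwd
```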

As you can see in this example, a DTail client also creates a local log file of all received data in ~/log. You can also use the -noColor and -plain flags (these also work with DTail commands other than dcat).


Hint: You can also use the shorthand version:


The following example demonstrates how to grep files (display only the lines which match a given regular expression) on multiple servers at once. In this example, we look for some entries in /etc/passwd. This time, we don't provide the server list via a file but rather via a comma-separated list directly on the command line. We also explore the -before, -after and -max flags (see animation).


--files /etc/passwd \

--regex nologin

Generally, dgrep is also a very useful way to search historic application logs for certain content.

Hint: -regex is an alias for -grep.
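Putting the pieces together, a full dgrep invocation with context and limit flags might look roughly like this (a sketch with assumed flag spellings; the values are illustrative):

```
# Sketch: grep /etc/passwd on two servers, showing one line of
# context before and after each match, up to 10 matches.
dgrep --servers server1.example.org,server2.example.org \
      --files /etc/passwd \
      --regex nologin \
      -before 1 -after 1 -max 10
```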

To run a map-reduce aggregation over logs written in the past, the dmap command can be used. The following example aggregates all map-reduce fields. dmap will print interim results every few seconds. You can also write the result to a CSV file by adding outfile result.csv to the query.


--files <font color="#808080">'/var/log/dserver/*.log'</font> \

--query <font color="#808080">'from STATS select $hostname,max($goroutines),max($cgocalls),$loadavg,</font>

lifetimeConnections group by $hostname order by max($cgocalls)'

Remember: For that to work, you have to make sure that DTail supports your log format. You can either use the ones already defined in internal/mapr/logformat or add an extension to support a custom log format. The example here works out of the box though, as DTail understands its own log format already.

Until now, all examples required remote server(s) to connect to. That makes sense, as after all DTail is a distributed tool. However, there are circumstances where you don't really need to connect to a server remotely. For example, you already have a login shell open on the server and all you want is to run some queries directly on local log files.

The serverless mode does not require any dserver up and running and therefore there is no networking/SSH involved.

All commands shown so far also work in serverless mode. All that needs to be done is to omit the server list. The DTail client then starts in serverless mode.
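For example, a serverless grep of a local file could look like this (a sketch with assumed flag spellings; the only difference is that no server list is given):

```
# Sketch: serverless mode - operate on the local file directly.
dgrep --files /etc/passwd --regex nologin
```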

The following dmap example is the same as the previously shown one, but the difference is that it operates on a local log file directly:


--query <font color="#808080">'from STATS select $hostname,max($goroutines),max($cgocalls),$loadavg,</font>

lifetimeConnections group by $hostname order by max($cgocalls)'

As a shorthand version the following command can be used:


lifetimeConnections group by $hostname order by max($cgocalls)' \

    /var/log/dserver/dserver.log

You can also use a file input pipe as follows:


dmap <font color="#808080">'from STATS select $hostname,max($goroutines),max($cgocalls),$loadavg,</font>

lifetimeConnections group by $hostname order by max($cgocalls)'

In essence, this works exactly like aggregating logs. All files operated on must be valid CSV files and the first line of the CSV must be the header. E.g.:


name,lastname,age,profession

Michael,Jordan,40,Basketball player

Michael,Jackson,100,Singer

Albert,Einstein,200,Physicist

% dmap --query 'select lastname,name where age > 40 logformat csv outfile result.csv' example.csv

% cat result.csv

lastname,name

Jackson,Michael

Einstein,Albert

DMap can also be used to query and aggregate CSV files from remote servers.

The serverless mode works transparently with all other DTail commands. Here are some examples:


# Should show no differences.

diff /etc/test /etc/passwd


Use --help for more available options. Or go to the DTail page for more information! Hope you find DTail useful!

E-Mail your comments to paul@nospam.buetow.org :-)

Other related posts are:

2021-04-22 DTail - The distributed log tail program

2022-03-06 The release of DTail 4.0.0

2022-10-30 Installing DTail on OpenBSD

2023-09-25 DTail usage examples (You are currently reading this)

I hope you find the tools presented in this post useful!

Paul

Back to the main site

        </div>

    </content>

</entry>

<entry>

    <title>Site Reliability Engineering - Part 1: SRE and Organizational Culture</title>

    <link href="gemini://foo.zone/gemfeed/2023-08-18-site-reliability-engineering-part-1.gmi" />

    <id>gemini://foo.zone/gemfeed/2023-08-18-site-reliability-engineering-part-1.gmi</id>

    <updated>2023-08-18T22:43:47+03:00</updated>

    <author>

        <name>Paul Buetow aka snonux</name>

        <email>paul@dev.buetow.org</email>

    </author>

    <summary>Being a Site Reliability Engineer (SRE) is like stepping into a lively, ever-evolving universe. The world of SRE mixes together different tech, a unique culture, and a whole lot of determination. It’s one of the toughest but most exciting jobs out there. There's zero chance of getting bored because there's always a fresh challenge to tackle and new technology to play around with. It's not just about the tech side of things either; it's heavily rooted in communication, collaboration, and teamwork. As someone currently working as an SRE, I’m here to break it all down for you in this blog series. Let's dive into what SRE is really all about!</summary>

    <content type="xhtml">

        <div xmlns="http://www.w3.org/1999/xhtml">

            <h1 style='display: inline' id='site-reliability-engineering---part-1-sre-and-organizational-culture'>Site Reliability Engineering - Part 1: SRE and Organizational Culture</h1><br />

Published at 2023-08-18T22:43:47+03:00

Being a Site Reliability Engineer (SRE) is like stepping into a lively, ever-evolving universe. The world of SRE mixes together different tech, a unique culture, and a whole lot of determination. It’s one of the toughest but most exciting jobs out there. There's zero chance of getting bored because there's always a fresh challenge to tackle and new technology to play around with. It's not just about the tech side of things either; it's heavily rooted in communication, collaboration, and teamwork. As someone currently working as an SRE, I’m here to break it all down for you in this blog series. Let's dive into what SRE is really all about!

2023-08-18 Site Reliability Engineering - Part 1: SRE and Organizational Culture (You are currently reading this)

2023-11-19 Site Reliability Engineering - Part 2: Operational Balance

2024-01-09 Site Reliability Engineering - Part 3: On-Call Culture

2024-09-07 Site Reliability Engineering - Part 4: Onboarding for On-Call Engineers

DC on fire:

[ASCII art: a data center on fire]

At the core of SRE is the principle of "prevention over cure." Unlike traditional IT setups that mostly react to problems, SRE focuses on spotting issues before they happen. This proactive approach involves using Service Level Indicators (SLIs) and Service Level Objectives (SLOs). These tools give teams specific metrics and targets to aim for, helping them keep systems reliable and users happy. It's all about creating a culture that prioritizes user experience and makes sure everything runs smoothly to meet their needs.

Another key concept in SRE is the "error budget." It’s a clever approach that recognizes no system is perfect and that failures will happen. Instead of punishing mistakes, SRE culture embraces them as chances to learn and improve. The idea is to give teams a "budget" for errors, creating a space where innovation can thrive and failures are simply seen as lessons learned.
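To make the idea concrete, here is a small back-of-the-envelope calculation of my own (illustrative only, not from any particular SRE handbook): the error budget for an availability SLO is simply the slice of time you are allowed to fail.

```shell
#!/usr/bin/env bash
# Illustrative example: monthly error budget for a 99.9% availability SLO.
slo=99.9
minutes_in_month=$((30 * 24 * 60))   # 43200 minutes in a 30-day month
# Allowed downtime = total minutes * (100 - SLO) / 100
budget=$(awk -v m="$minutes_in_month" -v s="$slo" \
    'BEGIN { printf "%.1f", m * (100 - s) / 100 }')
echo "Error budget: ${budget} minutes per month"
```

With a 99.9% SLO that works out to about 43 minutes of allowed downtime per month; once that budget is spent, the team dials back risky releases in favour of reliability work.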

SRE isn't just about tech and metrics; it's also about people. It tackles the "hero culture" that often ends up burning out IT teams. Sure, having a hero swoop in to save the day can be great, but relying on that all the time just isn’t sustainable. Instead, SRE focuses on collective expertise and teamwork. It recognizes that heroes are at their best within a solid team, making the need for constant heroics unnecessary. This way of thinking promotes a balanced on-call experience and highlights trust, ownership, good communication, and collaboration as key to success. I've been there myself, falling into the hero trap, and I know firsthand that it's just not feasible to be the go-to person for every problem that comes up.

Also, the SRE model puts a big emphasis on good documentation. It's not enough to just have docs; they need to be top-notch and go through the same quality checks as code. This really helps with onboarding new team members, training, and keeping everyone on the same page.

Adopting SRE can be a big challenge for some organizations. They might think the SRE approach goes against their goals, like preferring to roll out new features quickly rather than focusing on reliability, or seeing SRE practices as too much hassle. Building an SRE culture often means taking the time to explain things patiently and showing the benefits, like faster release cycles and a better user experience.

Monitoring and observability are also big parts of SRE, highlighting the need for top-notch tools to query and analyze data. This aligns with the SRE focus on continuous learning and being adaptable. SREs naturally need to be curious, ready to dive into any strange issues, and always open to picking up new tools and practices.

For SRE to really work in any organization, everyone needs to buy into its principles. It's about moving away from working in isolated silos and relying on SRE to just patch things up. Instead, it’s about making reliability a shared responsibility across the whole team.

In short, bringing SRE principles into the mix goes beyond just the technical stuff. It helps shift the whole organizational culture to value things like preventing issues before they happen, always learning, working together, and being open with communication. When SRE and corporate culture blend well, you end up with not just reliable systems but also a strong, resilient, and forward-thinking workplace.

Organizations that have SLIs, SLOs, and error budgets in place are already pretty far along in their SRE journey. Getting there takes a lot of communication, convincing people, and patience.

Continue with the second part of this series:

2023-11-19 Site Reliability Engineering - Part 2: Operational Balance

E-Mail your comments to paul@nospam.buetow.org :-)

Back to the main site

        </div>

    </content>

</entry>

<entry>

    <title>Gemtexter 2.1.0 - Let's Gemtext again³</title>

    <link href="gemini://foo.zone/gemfeed/2023-07-21-gemtexter-2.1.0-lets-gemtext-again-3.gmi" />

    <id>gemini://foo.zone/gemfeed/2023-07-21-gemtexter-2.1.0-lets-gemtext-again-3.gmi</id>

    <updated>2023-07-21T10:19:31+03:00</updated>

    <author>

        <name>Paul Buetow aka snonux</name>

        <email>paul@dev.buetow.org</email>

    </author>

    <summary>I proudly announce that I've released Gemtexter version `2.1.0`. What is Gemtexter? It's my minimalist static site generator for Gemini Gemtext, HTML and Markdown, written in GNU Bash.</summary>

    <content type="xhtml">

        <div xmlns="http://www.w3.org/1999/xhtml">

            <h1 style='display: inline' id='gemtexter-210---let-s-gemtext-again'>Gemtexter 2.1.0 - Let&#39;s Gemtext again³</h1><br />

Published at 2023-07-21T10:19:31+03:00

I proudly announce that I've released Gemtexter version 2.1.0. What is Gemtexter? It's my minimalist static site generator for Gemini Gemtext, HTML and Markdown, written in GNU Bash.

https://codeberg.org/snonux/gemtexter

[ASCII art: typewriters, by jgs, modified by Paul Buetow]

Admittedly, this project is too complex for a Bash script. But writing it in Bash was an experiment to find out how maintainable a "larger" Bash script could be. It's still pretty maintainable, and it helps me try out new Bash tricks now and then!

Let's list what's new!

Many (almost all) of the tools and commands (GNU Bash, GNU Sed, GNU Date, GNU Grep, GNU Source Highlight) used by Gemtexter are licensed under the GPL anyway. So why not use the same license? This was an easy switch, as I have been the only code contributor so far!

The HTML output now supports source code highlighting, which is pretty neat if your site is about programming. The requirement is to have the source-highlight command (GNU Source Highlight) installed. Once done, you can annotate a code block with the language to be highlighted. E.g.:

if [ -n "$foo" ]; then

    echo "$foo"

fi

The result will look like this (you can see the code highlighting only in the Web version, not in the Geminispace version of this site):

if [ -n "$foo" ]; then

    echo "$foo"

fi

Please run source-highlight --lang-list for a list of all supported languages.

Gemtexter is there to convert your Gemini Capsule into other formats, such as HTML and Markdown. An exact HTML variant can now be enabled in the gemtexter.conf by adding the line declare -rx HTML_VARIANT=exact. The HTML/CSS output changed to reflect a more exact Gemtext appearance and to respect the same spacing as you would see in Geminispace.
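In context, the relevant snippet of gemtexter.conf then contains exactly the line quoted above:

```
# gemtexter.conf: enable the exact HTML variant
declare -rx HTML_VARIANT=exact
```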

The Hack web font is a typeface designed explicitly for source code. It's a derivative of the Bitstream Vera and DejaVu Mono lineage, but it features many improvements and refinements that make it better suited to reading and writing code.

The font has distinctive glyphs for every character, which helps to reduce confusion between similar-looking characters. For example, the characters "0" (zero), "O" (capital o), and "o" (lowercase o), or "1" (one), "l" (lowercase L), and "I" (capital i) all have distinct looks in Hack, making it easier to read and understand code at a glance.

Hack is open-source and freely available for use and modification under the MIT License.

The following link explains how URL verification works in Mastodon:

https://joinmastodon.org/verification

So we have to hyperlink to the Mastodon profile to be verified and also include a rel='me' attribute in the tag. In order to do that, add this to the gemtexter.conf (replace the URI with your own Mastodon profile accordingly):


and add the following into your index.gmi:

=> https://fosstodon.org/@snonux Me at Mastodon

The resulting hyperlink tag in the HTML output will then contain the rel='me' attribute pointing to your Mastodon profile.


Additionally, a couple of bug fixes, refactorings and overall documentation improvements were made.

E-Mail your comments to paul@nospam.buetow.org :-)

Other related posts are:

2021-04-24 Welcome to the Geminispace

2021-06-05 Gemtexter - One Bash script to rule it all

2022-08-27 Gemtexter 1.1.0 - Let's Gemtext again

2023-03-25 Gemtexter 2.0.0 - Let's Gemtext again²

2023-07-21 Gemtexter 2.1.0 - Let's Gemtext again³ (You are currently reading this)

2024-10-02 Gemtexter 3.0.0 - Let's Gemtext again⁴

Back to the main site

        </div>

    </content>

</entry>

<entry>

    <title>'Software Developers Career Guide and Soft Skills' book notes</title>

    <link href="gemini://foo.zone/gemfeed/2023-07-17-career-guide-and-soft-skills-book-notes.gmi" />

    <id>gemini://foo.zone/gemfeed/2023-07-17-career-guide-and-soft-skills-book-notes.gmi</id>

    <updated>2023-07-17T04:56:20+03:00</updated>

    <author>

        <name>Paul Buetow aka snonux</name>

        <email>paul@dev.buetow.org</email>

    </author>

    <summary>These notes are from two books by 'John Sonmez' that I found helpful. I also added some of my own key points. These notes are mainly for my own use, but you might find them helpful, too.</summary>

    <content type="xhtml">

        <div xmlns="http://www.w3.org/1999/xhtml">

            <h1 style='display: inline' id='software-developmers-career-guide-and-soft-skills-book-notes'>"Software Developers Career Guide and Soft Skills" book notes</h1><br />

Published at 2023-07-17T04:56:20+03:00

These notes are from two books by "John Sonmez" that I found helpful. I also added some of my own key points. These notes are mainly for my own use, but you might find them helpful, too.

[ASCII art]

When you learn something new, e.g. a programming language, first gather an overview, learn from multiple sources, play around, learn by doing (not just consuming), and form your own questions. Don't read too much upfront. A large amount of time is spent learning technical skills that are never used. You want a practical set of skills you actually use. You need to know 20 percent to get 80 percent of the results.

Fake it until you make it. But be honest about your abilities, or the lack thereof. There is, however, only the time between now and when you make it. Point to your ability to learn.

Boot camps: The advantage of a boot camp is to pragmatically learn things fast. We almost always overestimate what we can do in a day, especially during boot camps. Connect with others during the boot camp.

Your own goals are important but the manager also looks at how the team performs and how someone can help the team perform better. Check whether you are on track with your goals every 2 weeks in order to avoid surprises for the annual review. Make concrete goals for next review. Track and document your progress. Invest in your education. Make your goals known. If you want something, then ask for it. Nobody but you knows what you want.

Self-ratings are a trap: If you have to rate yourself, that never works in an unbiased way. Rate your weakest area as high as possible minus one point, and rate yourself as well as you can otherwise. Nobody puts a gun to their own head for fun.

The most valuable employees are the ones who make themselves obsolete and automate everything away. Keep a safety net of 3 to 6 months of finances. Save at least 10 percent of your earnings. Also, making money does not mean that you have to spend more money. Is a new car better than a used car when both can bring you from A to B? Liability vs. assets.

Hard work is necessary to accomplish results. However, work smarter, not harder. Furthermore, working smart is not a substitute for working hard. Do both: work hard and smart.

Hard vs. fun: Both engage the brain (video games vs. work). Some work is hard and other work is easy. Hard work is boring. The harsh truth is you have to put in hard and boring work in order to accomplish things and be successful. Work won't always be boring though, as joy follows with mastery.

Defeat is finally giving up. Failure is the road to success; embrace it. Failure does not define you, but how you respond to it does. Events don't make you unhappy, but how you react to events does.

The larger your empire is, the larger your circle of influence is. The larger the circle of influence is, the more opportunities you have.

Become visible, keep track of your accomplishments. E.g. write a weekly summary. Do presentations, be seen. Learn new things and share your learnings. Be the problem solver and not the blamer.

Make use of time boxing via the Pomodoro technique: Set a target number of rounds and track them. That gives you exact focused work time. That's really the trick. For example, set a goal of 6 daily pomodoros.

You should feel good about the work done even if you didn't finish the task. You will feel good pomodoro-wise even if you don't finish the task at hand yet. This helps you to enjoy time off more. Working longer may not yield anything.

Define a quota of things done. E.g. N runs per week, M blog posts per month, or O pomodoros per week. This helps with consistency. Truly commit to these quotas; failure is not an option. Start with small commitments. Don't commit to something you can't fulfill, otherwise you set yourself up for failure.

The biggest time waster is TV watching. The TV is programming you. It's insane how much TV Americans watch while working full time. Schedule one show at a time and watch it when you want to watch it. Most movies are crap anyway. The good movies will come to you, as people will talk about them.

Try to have as many good habits as possible. Start with easy habits, and make them a little more challenging over time. Set anchors and rewards. Over time the routines will become habits naturally.

Habit stacking is effective: combining multiple habits at the same time. For example, you can work out on a circular trainer while watching a learning video on O'Reilly Safari Online while getting closer to your weekly step goal.

Avoid overtime. It's not as beneficial as you might think and comes with only very small rewards. Invest in yourself rather than in your employer.

Use your most productive hours to work on yourself. Make that your priority. Make taking care of yourself a priority (e.g. do workouts or learn a new language). You can always work out 1 or 2 hours per day, but will you pay the price?

Become the person you want to become (your self-image). Program your brain unconsciously. Don't become the person other people want you to be. Embrace yourself; you are you.

In most cases burnout is just an illusion. If you don't have motivation, push through the wall. People usually don't pass the wall because they feel they are burned out. After pushing through the wall you will have the most fun, for example you will be able to play the guitar greatly.

Utilise a standing desk and treadmill (you can walk and type at the same time). Increase the incline in order to burn more calories. Even at a standing desk you burn more calories than sitting. When you use pomodoro, you can use the small breaks for push-ups (though this may not work as well when you are in a fasted state).

Intermittent fasting is an effective method to maintain weight and health. But it does not mean that you can eat only junk food in the feeding windows. Also, diet and nutrition are the most important factors for health and fitness. They also make it easier to stay focused and positive.

Avoid drama at work. Where there are humans, there is drama. You can decide where to spend your energy. But don't avoid conflict. Conflict is healthy in any kind of relationship. Be tactful and state your opinion. The goal is to find the best solution to the problem.

Don't worry about what other people do and don't do. Only worry about yourself. Shut up and get your own things done. But you could help to inspire a colleague who isn't working.

You have to learn how to work in a team. Be honest but tactful. It's not about being the loudest but about selling your ideas. Don't argue, otherwise you won't sell anything. Be persuasive by finding common ground. Or lead your colleagues to your idea rather than selling it upfront. Communicate clearly.

Have a blog. Schedule your posts. Consistency beats every other factor. E.g. post a new article once a month. Find your voice; you don't have to sound academic. Keep writing: if you keep it up long enough, the rewards will come. Your own blog can take 5 years to take off. Most people give up too soon.

Ask people questions so they talk about themselves. They are not really interested in you. Use meetup.com to find groups you are interested in and build up your network over time. Don't drink at social networking events, even when others do. Talking to other people at events only has upsides. Just saying "hi" and introducing yourself is enough. What's the worst that can happen? If the person rejects you, so what, life goes on. Ask open questions, not "yes"/"no" questions. E.g.: "What is your story, why are you here?".

Before your talk, go on stage 10 minutes in advance. Introduce yourself to the people in the front row. During the talk they will smile at you and encourage you.

Just do it. Just go to conferences, even if you are not speaking. Sell your boss on what you would learn, and offer to present the learnings to the team afterwards.

If you are specialized, there is a better chance of getting a fitting job. No one will hire a general lawyer if specialized lawyers are available. Even if you are specialized, you will have a wide range of skills (T-shaped knowledge).

Not all companies are equal. They have individual cultures and guidelines.

Work at a tech company if you want to work on/with cutting-edge technologies.

Get a professional resume writer. Get referrals for writers and get samples from them. Get proficient with algorithm and data structure interview questions. See the "Cracking the Coding Interview" book and blog.

Invest in your dress code, as appearance matters. It does make sense to invest in your style. You could even hire a professional stylist (not my personal way, though).

When leaving a job, make it as clean and non-personal as possible. Never complain and never explain. Don't worry about abandoning the team. Everybody is replaceable and you are making a business decision. Don't threaten to quit, as you are replaceable.

Unit testing vs. regression testing: Unit tests test the smallest possible unit and get rewritten if the unit changes. It's like programming against a specification. Regression tests test whether the software still works after a change. Now you know more than most software engineers.

E-Mail your comments to paul@nospam.buetow.org :-)

Other book notes of mine are:

2023-03-16 "The Pragmatic Programmer" book notes

2023-04-01 "Never split the difference" book notes

2023-05-06 "The Obstacle is the Way" book notes

2023-07-17 "Software Developers Career Guide and Soft Skills" book notes (You are currently reading this)

2023-11-11 "Mind Management" book notes

2024-05-01 "Slow Productivity" book notes

2024-07-07 "The Stoic Challenge" book notes

2024-10-24 "Staff Engineer" book notes

Back to the main site

        </div>

    </content>

</entry>

<entry>

    <title>KISS server monitoring with Gogios</title>

    <link href="gemini://foo.zone/gemfeed/2023-06-01-kiss-server-monitoring-with-gogios.gmi" />

    <id>gemini://foo.zone/gemfeed/2023-06-01-kiss-server-monitoring-with-gogios.gmi</id>

    <updated>2023-06-01T21:10:17+03:00</updated>

    <author>

        <name>Paul Buetow aka snonux</name>

        <email>paul@dev.buetow.org</email>

    </author>

    <summary>Gogios is a minimalistic and easy-to-use monitoring tool I programmed in Google Go designed specifically for small-scale self-hosted servers and virtual machines. The primary purpose of Gogios is to monitor my personal server infrastructure for `foo.zone`, my MTAs, my authoritative DNS servers, my NextCloud, Wallabag and Anki sync server installations, etc.</summary>

    <content type="xhtml">

        <div xmlns="http://www.w3.org/1999/xhtml">

            <h1 style='display: inline' id='kiss-server-monitoring-with-gogios'>KISS server monitoring with Gogios</h1><br />

Published at 2023-06-01T21:10:17+03:00

Gogios is a minimalistic and easy-to-use monitoring tool I programmed in Google Go designed specifically for small-scale self-hosted servers and virtual machines. The primary purpose of Gogios is to monitor my personal server infrastructure for foo.zone, my MTAs, my authoritative DNS servers, my NextCloud, Wallabag and Anki sync server installations, etc.

With compatibility with the Nagios Check API, Gogios offers a simple yet effective solution to monitor a limited number of resources. In theory, Gogios scales to a couple of thousand checks, though. You can clone it from Codeberg here:

https://codeberg.org/snonux/gogios

(ASCII art of two computer screens: GOGIOS MONITOR 1 shows "Alerts with status changed: OK->CRITICAL: Check Pizza: Late delivery"; GOGIOS MONITOR 2 shows "Unhandled alerts: CRITICAL: Check Pizza: Late delivery, WARNING: Check Thirst: OutOfKombuchaExcept...")


ASCII art was modified by Paul Buetow

The original can be found at

https://asciiart.website/index.php?art=objects/computers

Having experience with monitoring solutions like Nagios, Icinga, Prometheus and OpsGenie, I found that these tools often came with many features that I didn't necessarily need for personal use. Contact groups, host groups, check clustering, and the requirement of operating a DBMS and a WebUI added complexity and bloat to my monitoring setup.

My primary goal was to have a single email address for notifications and a simple mechanism to periodically execute standard Nagios check scripts and notify me of any state changes. I wanted the most minimalistic monitoring solution possible but wasn't satisfied with the available options.

This led me to create Gogios, a lightweight monitoring tool tailored to my specific needs. I chose the Go programming language for this project as it comes, in my opinion, with the best balance of ease of use and performance.
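Gogios is compatible with the Nagios Check API, which boils down to running a plugin and interpreting its exit code: 0 is OK, 1 is WARNING, 2 is CRITICAL, and everything else is UNKNOWN. The core of such a check runner can be sketched in a few lines of Go (the function names here are my own and not taken from the Gogios source):

```go
package main

import (
	"fmt"
	"os/exec"
)

// nagiosStatus maps a plugin exit code to its Nagios status name:
// 0=OK, 1=WARNING, 2=CRITICAL, everything else=UNKNOWN.
func nagiosStatus(exitCode int) string {
	switch exitCode {
	case 0:
		return "OK"
	case 1:
		return "WARNING"
	case 2:
		return "CRITICAL"
	default:
		return "UNKNOWN"
	}
}

// runCheck executes a monitoring plugin and returns its status
// plus whatever the plugin printed to stdout/stderr.
func runCheck(plugin string, args ...string) (string, string) {
	out, err := exec.Command(plugin, args...).CombinedOutput()
	code := 0
	if exitErr, ok := err.(*exec.ExitError); ok {
		code = exitErr.ExitCode()
	} else if err != nil {
		code = 3 // the plugin could not be executed at all
	}
	return nagiosStatus(code), string(out)
}

func main() {
	// "true" exits with code 0 on any Unix-like system, so this reports OK.
	status, _ := runCheck("true")
	fmt.Println(status)
}
```

A real checker would additionally enforce a check timeout (CheckTimeoutS) and run up to CheckConcurrency plugins in parallel.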

This is an example alert report received via e-mail. Here, [C:2 W:0 U:0 OK:51] means that we've got two alerts in status critical, zero warnings, zero unknowns and 51 OKs.

Subject: GOGIOS Report [C:2 W:0 U:0 OK:51]

This is the recent Gogios report!

Alerts with status changed:

OK->CRITICAL: Check ICMP4 vulcan.buetow.org: Check command timed out

OK->CRITICAL: Check ICMP6 vulcan.buetow.org: Check command timed out

Unhandled alerts:

CRITICAL: Check ICMP4 vulcan.buetow.org: Check command timed out

CRITICAL: Check ICMP6 vulcan.buetow.org: Check command timed out

Have a nice day!

This document is primarily written for OpenBSD, but applying the corresponding steps to any Unix-like (e.g. Linux-based) operating system should be easy. On systems other than OpenBSD, you may have to replace doas with the sudo command and replace the /usr/local/bin path with /usr/bin.

To compile and install Gogios on OpenBSD, follow these steps:


cd gogios

go build -o gogios cmd/gogios/main.go

doas cp gogios /usr/local/bin/gogios

doas chmod 755 /usr/local/bin/gogios

You can use cross-compilation if you want to compile Gogios for OpenBSD on a Linux system without installing the Go compiler on OpenBSD. Follow these steps:


export GOOS=openbsd

export GOARCH=amd64

go build -o gogios cmd/gogios/main.go

On your OpenBSD system, copy the binary to /usr/local/bin/gogios and set the correct permissions as described in the previous section. All steps described here you could automate with your configuration management system of choice. I use Rexify, the friendly configuration management system, to automate the installation, but that is out of the scope of this document.

https://www.rexify.org

It is best to create a dedicated system user and group for Gogios to ensure proper isolation and security. Here are the steps to create the _gogios user and group under OpenBSD:


doas groupadd _gogios

doas useradd -g _gogios -s /sbin/nologin -d /var/run/gogios _gogios

doas mkdir -p /var/run/gogios

doas chown _gogios:_gogios /var/run/gogios

doas chmod 750 /var/run/gogios

Please note that creating a user and group might differ depending on your operating system. For other operating systems, consult their documentation for creating system users and groups.

Gogios relies on external Nagios or Icinga monitoring plugin scripts. On OpenBSD, you can install the monitoring-plugins package to use with Gogios. The monitoring-plugins package is a collection of monitoring plugins, similar to the Nagios plugins, that can be used to monitor various services and resources:

doas pkg_add monitoring-plugins

doas pkg_add nrpe # If you want to execute checks remotely via NRPE.

Once the installation is complete, you can find the monitoring plugins in the /usr/local/libexec/nagios directory, which then can be configured to be used in gogios.json.

Gogios requires a local Mail Transfer Agent (MTA) such as Postfix or OpenBSD SMTPD running on the same server where the CRON job (see about the CRON job further below) is executed. The local MTA handles email delivery, allowing Gogios to send email notifications to monitor status changes. Before using Gogios, ensure that you have a properly configured MTA installed and running on your server to facilitate the sending of emails. Once the MTA is set up and functioning correctly, Gogios can leverage it to send email notifications.

You can use the mail command to send an email via the command line on OpenBSD. Here's an example of how to send a test email to ensure that your email server is working correctly:

echo 'This is a test email from OpenBSD.' | mail -s 'Test Email' your-email@example.com

Check the recipient's inbox to confirm the delivery of the test email. If the email is delivered successfully, it indicates that your email server is configured correctly and functioning. Please check your MTA logs in case of issues.

To configure Gogios, create a JSON configuration file (e.g., /etc/gogios.json). Here's an example configuration:


{
    "EmailTo": "paul@dev.buetow.org",
    "EmailFrom": "gogios@buetow.org",
    "CheckTimeoutS": 10,
    "CheckConcurrency": 2,
    "StateDir": "/var/run/gogios",
    "Checks": {
        "Check ICMP4 www.foo.zone": {
            "Plugin": "/usr/local/libexec/nagios/check_ping",
            "Args": [ "-H", "www.foo.zone", "-4", "-w", "50,10%", "-c", "100,15%" ],
            "Retries": 3,
            "RetryInterval": 10
        },
        "Check ICMP6 www.foo.zone": {
            "Plugin": "/usr/local/libexec/nagios/check_ping",
            "Args": [ "-H", "www.foo.zone", "-6", "-w", "50,10%", "-c", "100,15%" ],
            "Retries": 3,
            "RetryInterval": 10
        },
        "www.foo.zone HTTP IPv4": {
            "Plugin": "/usr/local/libexec/nagios/check_http",
            "Args": [ "www.foo.zone", "-4" ],
            "DependsOn": [ "Check ICMP4 www.foo.zone" ]
        },
        "www.foo.zone HTTP IPv6": {
            "Plugin": "/usr/local/libexec/nagios/check_http",
            "Args": [ "www.foo.zone", "-6" ],
            "DependsOn": [ "Check ICMP6 www.foo.zone" ]
        },
        "Check NRPE Disk Usage foo.zone": {
            "Plugin": "/usr/local/libexec/nagios/check_nrpe",
            "Args": [ "-H", "foo.zone", "-c", "check_disk", "-p", "5666", "-4" ]
        }
    }
}

Adjust the configuration file according to your needs, specifying the checks you want Gogios to perform.

If you want to execute checks only when another check succeeded (status OK), use DependsOn. In the example above, the HTTP checks won't run when the hosts aren't pingable. They will show up as UNKNOWN in the report.
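This dependency gating can be sketched as follows; the Check struct and function names are my own invention and not the actual Gogios internals:

```go
package main

import "fmt"

// Check models a configured check with optional dependencies,
// mirroring the DependsOn field in gogios.json (names are mine).
type Check struct {
	Status    string   // "OK", "WARNING", "CRITICAL" or "UNKNOWN"
	DependsOn []string
}

// gateByDependencies marks a check UNKNOWN when any of its
// dependencies is not OK, so its plugin would not need to run.
func gateByDependencies(checks map[string]*Check) {
	for _, c := range checks {
		for _, dep := range c.DependsOn {
			if d, ok := checks[dep]; ok && d.Status != "OK" {
				c.Status = "UNKNOWN"
			}
		}
	}
}

func main() {
	checks := map[string]*Check{
		"Check ICMP4 www.foo.zone": {Status: "CRITICAL"},
		"www.foo.zone HTTP IPv4": {
			Status:    "OK", // would have run, but its dependency failed
			DependsOn: []string{"Check ICMP4 www.foo.zone"},
		},
	}
	gateByDependencies(checks)
	fmt.Println(checks["www.foo.zone HTTP IPv4"].Status)
}
```

In a real scheduler the gating would happen before the dependent plugin is executed at all, not after.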

Retries and RetryInterval are optional check configuration parameters. In case of failure, Gogios will retry up to Retries times, waiting RetryInterval seconds between attempts.
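The retry semantics can be sketched like this (again, my own function names, not the Gogios source):

```go
package main

import (
	"fmt"
	"time"
)

// runWithRetries re-runs a failing check up to retries extra times,
// sleeping retryIntervalS seconds between attempts, and returns the
// last status. Status 0 means OK, as in the Nagios plugin API.
func runWithRetries(check func() int, retries, retryIntervalS int) int {
	status := check()
	for i := 0; i < retries && status != 0; i++ {
		time.Sleep(time.Duration(retryIntervalS) * time.Second)
		status = check()
	}
	return status
}

func main() {
	attempts := 0
	flaky := func() int { // fails twice, then succeeds
		attempts++
		if attempts < 3 {
			return 2 // CRITICAL
		}
		return 0 // OK
	}
	fmt.Println(runWithRetries(flaky, 3, 0))
}
```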

For remote checks, use the check_nrpe plugin. You also need to have the NRPE server set up correctly on the target host (out of scope for this document).

The state file stored in StateDir keeps track of the monitoring state and check results between Gogios runs, enabling Gogios to only send email notifications when there are changes in the check status.

Now it is time to give it a first run. On OpenBSD, do:

doas -u _gogios /usr/local/bin/gogios -cfg /etc/gogios.json

To run Gogios via CRON on OpenBSD as the _gogios user, follow these steps:

Type doas crontab -e -u _gogios and press Enter to open the crontab file of the _gogios user for editing and add the following lines to the crontab file:

*/5 8-22 * * * /usr/local/bin/gogios -cfg /etc/gogios.json

0 7 * * * /usr/local/bin/gogios -renotify -cfg /etc/gogios.json

Gogios is now configured to run every five minutes from 8 am to 10 pm via CRON as the _gogios user. It will execute the checks and send monitoring status whenever a check status changes via email according to your configuration. Also, Gogios will run once at 7 am every morning and re-notify all unhandled alerts as a reminder.

To create a high-availability Gogios setup, you can install Gogios on two servers that will monitor each other using the NRPE (Nagios Remote Plugin Executor) plugin. By running Gogios in alternate CRON intervals on both servers, you can ensure that even if one server goes down, the other will continue monitoring your infrastructure and sending notifications.
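For example, the two nodes could run Gogios in alternating minutes, so at most one instance is active at a time (a hypothetical schedule; adjust to taste):

```shell
# Server A, crontab of the _gogios user: run on even minutes
*/2 8-22 * * * /usr/local/bin/gogios -cfg /etc/gogios.json

# Server B, crontab of the _gogios user: run on odd minutes
1-59/2 8-22 * * * /usr/local/bin/gogios -cfg /etc/gogios.json
```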

There are plans to make it possible to execute certain checks only on certain nodes (e.g. on elected leader or master nodes). This is still in progress (check out my Gorum Git project).

Gogios is a lightweight and straightforward monitoring tool that is perfect for small-scale environments. With its compatibility with the Nagios Check API, email notifications, and CRON-based scheduling, Gogios offers an easy-to-use solution for those looking to monitor a limited number of resources. I personally use it to execute around 500 checks on my personal server infrastructure. I am very happy with this solution.

E-Mail your comments to paul@nospam.buetow.org :-)

Other KISS-related posts are:

2021-09-12 Keep it simple and stupid

2023-06-01 KISS server monitoring with Gogios (You are currently reading this)

2023-10-29 KISS static web photo albums with photoalbum.sh

2024-04-01 KISS high-availability with OpenBSD

Back to the main site

        </div>

    </content>

</entry>

<entry>

    <title>'The Obstacle is the Way' book notes</title>

    <link href="gemini://foo.zone/gemfeed/2023-05-06-the-obstacle-is-the-way-book-notes.gmi" />

    <id>gemini://foo.zone/gemfeed/2023-05-06-the-obstacle-is-the-way-book-notes.gmi</id>

    <updated>2023-05-06T17:23:16+03:00</updated>

    <author>

        <name>Paul Buetow aka snonux</name>

        <email>paul@dev.buetow.org</email>

    </author>

    <summary>These are my personal takeaways after reading 'The Obstacle Is the Way' by Ryan Holiday. This is mainly for my own use, but you might find it helpful too.</summary>

    <content type="xhtml">

        <div xmlns="http://www.w3.org/1999/xhtml">

            <h1 style='display: inline' id='the-obstacle-is-the-way-book-notes'>"The Obstacle is the Way" book notes</h1><br />

Published at 2023-05-06T17:23:16+03:00

These are my personal takeaways after reading "The Obstacle Is the Way" by Ryan Holiday. This is mainly for my own use, but you might find it helpful too.

(Decorative ASCII art of an open book)

"The obstacle is the way" is a powerful statement that encapsulates the wisdom of turning challenges into opportunities for growth and success. We will explore using obstacles as fuel, transforming weaknesses into strengths, and adopting a mindset that allows us to be creative and persistent in the face of adversity.

The obstacle in your path can become your path to success. Instead of being paralyzed by challenges, see them as opportunities to learn and grow. Remember, the things that hurt us often instruct us.

We spend a lot of time trying to get things perfect and to follow the rules, but what matters is that it works; it doesn't need to be by the book. Focus on results rather than on beautiful methods. In Jujitsu, it matters that you bring your opponent down, not how. There are many ways from point A to point B; it doesn't need to be a straight line. So many try to find the best solution but fail to see what is right in front of them. Think progress, not perfection.

Don't always try to use the front door; a back door could be open. Fighting on the opponent's terms is nonsense: don't fight the judo master with judo. Non-action can be action, exposing the weaknesses of others.

It is a superpower to see things rationally when others are fearful. Focus on the reality of the situation without letting emotions, such as anger, cloud your judgment. This ability will enable you to make better decisions in adversity. Cultivate the ability to see things as they really are: e.g., wine is just old fermented grapes, and people in a fight behave like animals. Occasionally, show the middle finger to someone who insists on stupid rules.

You can choose how you respond to obstacles. Focus on what you can control, and don't let yourself feel harmed by external circumstances. Remember, you decide how things affect you; nobody else does. Choose to feel good in response to any situation. Embrace the challenges and obstacles that come your way, as they are opportunities for growth and learning.

Martial artists know the importance of developing physical and emotional strength. Cultivate the art of not panicking; it will help you avoid making mistakes during high-pressure situations.

Focus on what you can control. Don't choose to feel harmed, and then you won't be harmed. I decide which things affect me; nobody else does. E.g., even in prison, your mind stays your own. Don't ignore fear, but explain it away by taking a different view.

Practice persistence and patience in your pursuits. Focus on the process rather than the prize, and take one step at a time. The journey is about finishing tasks, projects, or workouts to the best of your ability. Never be in a hurry and never be desperate; there is no reason to be rushed, as we are all in it for the long haul.

Failure is a natural part of life and can make us stronger. Treat defeat as a stepping stone to success and education. What is defeat? The first step to education. If we do our best, we can be proud of it, regardless of the result. Do your job, and do it right. Only an asshole thinks he is too good for the things he does. Also, asking for forgiveness is easier than asking for permission.

There are many ways to achieve your goals; sometimes, unconventional methods are necessary. Feel free to break the rules or go off the beaten path if it will lead to better results. Transform weaknesses into strengths. We have a choice of how to respond to things. It's not about being positive but about being creative. Aim high, but know that surprises will always happen.

We constantly push on to the next thing, but sometimes the best course of action is to stand still, go sideways, or even go backwards. Obstacles may resolve themselves or present new opportunities if you're patient and observant. People always want your input before you have all the facts; they want you to play by their rules. The question is, do you let them? The English call it the cool head; staying in control under stress requires practice. The Greeks called it the absence of fear. When all others do it one way, it does not mean it is the correct or best practice.

In times of crisis, seize the chance to do things never done before. Great people use negative situations to their advantage and become the most effective in challenging circumstances.

Master the art of not panicking; otherwise, you will make mistakes. When others are shocked, you know which way to take because you have already thought through the problem at hand. A crisis gives you a chance to do things that have never been done before. Ordinary people shy away from negative situations; great people use them for their benefit and are the most effective. The obstacle is not just turned upside down but used as a catapult.

Be prepared for nothing to work. Problems are an opportunity to do your best, not to perform miracles. Always manage your expectations. It will suck, but it will be OK. Be prepared to begin from the beginning. Be cheerful and eagerly work on the next obstacle; each time, you become better. Life is not a sprint but a marathon. After each obstacle lies another obstacle; there won't be anything without obstacles. Passing one means you are ready for the next.

Develop your inner strength during good times so you can rely on it in bad times. Always prepare for adversity and face it with calmness and resilience. Be humble enough to accept that things which will happen will happen. Build your inner citadel: in good times strengthen it, in bad times rely on it.

We should always prepare for things to get tough. Your house burns down? No worries, we eliminated much rubbish. Imagine what can go wrong before things go wrong; then we are prepared for adversity while other people aren't (see Phil Jackson's hip problem as an example). To receive unexpected benefits, you must first accept unexpected obstacles. Meditate on death; it's the universal obstacle. Use it as a reminder to do your best.

Turn an obstacle the other way around for your benefit. Use it as fuel. It's simple but challenging; most are paralyzed instead. The obstacle in the path becomes the path. Obstacles are neither good nor bad. The things which hurt, instruct.

Should I hate people who hate me? That's their problem, not mine. Always be calm and relaxed during the fight. The story of the battle is the story of the smile: cheerfulness in all situations, especially the bad ones. Love everything that happens; if it happens, it was meant to happen. We can choose how we react to things, so why not choose to feel good? Never lower yourself to the level of the person you don't like.

Life is a marathon, not a sprint. Each obstacle we overcome prepares us for the next one. Remember, the obstacle is not just a barrier to be turned upside down; it can also be used as a catapult to propel us forward. By embracing challenges and using them as opportunities for growth, we become stronger, more adaptable, and, ultimately, more successful.

E-Mail your comments to paul@nospam.buetow.org :-)

Other book notes of mine are:

2023-03-16 "The Pragmatic Programmer" book notes

2023-04-01 "Never split the difference" book notes

2023-05-06 "The Obstacle is the Way" book notes (You are currently reading this)

2023-07-17 "Software Developer's Career Guide and Soft Skills" book notes

2023-11-11 "Mind Management" book notes

2024-05-01 "Slow Productivity" book notes

2024-07-07 "The Stoic Challenge" book notes

2024-10-24 "Staff Engineer" book notes

Back to the main site

        </div>

    </content>

</entry>

<entry>

    <title>Unveiling `guprecords.raku`: Global Uptime Records with Raku</title>

    <link href="gemini://foo.zone/gemfeed/2023-05-01-unveiling-guprecords:-uptime-records-with-raku.gmi" />

    <id>gemini://foo.zone/gemfeed/2023-05-01-unveiling-guprecords:-uptime-records-with-raku.gmi</id>

    <updated>2023-04-30T13:10:26+03:00</updated>

    <author>

        <name>Paul Buetow aka snonux</name>

        <email>paul@dev.buetow.org</email>

    </author>

    <summary>For fun, I am tracking the uptime of various personal machines (servers, laptops, workstations...). I have been doing this for over ten years now, so I have a lot of statistics collected.</summary>

    <content type="xhtml">

        <div xmlns="http://www.w3.org/1999/xhtml">

            <h1 style='display: inline' id='unveiling-guprecordsraku-global-uptime-records-with-raku'>Unveiling <span class='inlinecode'>guprecords.raku</span>: Global Uptime Records with Raku</h1><br />

Published at 2023-04-30T13:10:26+03:00

+-----+-----------------+-----------------------------+
| Pos | Host            | Lifespan                    |
+-----+-----------------+-----------------------------+
| 1.  | dionysus        | 8 years, 6 months, 17 days  |
| 2.  | uranus          | 7 years, 2 months, 16 days  |
| 3.  | alphacentauri   | 6 years, 9 months, 13 days  |
| 4.  | *vulcan         | 4 years, 5 months, 6 days   |
| 5.  | sun             | 3 years, 10 months, 2 days  |
| 6.  | uugrn           | 3 years, 5 months, 5 days   |
| 7.  | deltavega       | 3 years, 1 months, 21 days  |
| 8.  | pluto           | 2 years, 10 months, 30 days |
| 9.  | tauceti         | 2 years, 3 months, 22 days  |
| 10. | callisto        | 2 years, 3 months, 13 days  |
+-----+-----------------+-----------------------------+

For fun, I am tracking the uptime of various personal machines (servers, laptops, workstations...). I have been doing this for over ten years now, so I have a lot of statistics collected.

As a result of this, I am introducing guprecords.raku, a handy Raku script that helps me combine uptime statistics from multiple servers into one comprehensive report. In this blog post, I'll explore what Guprecords is and some examples of its application. I will also add some notes on Raku.

Guprecords, or global uptime records, is a Raku script designed to generate a consolidated uptime report from multiple hosts:

https://codeberg.org/snonux/guprecords

The Raku Programming Language

A previous version of Guprecords was actually written in Perl, the older and more established language from which Raku was developed. One of the primary motivations for rewriting Guprecords in Raku was to learn the language and explore its features. Raku is a more modern and powerful language compared to Perl, and working on a real-world project like Guprecords provided a practical and engaging way to learn the language.

Over the last years, I have been reading various books and resources about Raku.

And I have been following the Raku newsletter, and sometimes I have been lurking around in the IRC channels, too. Watching Raku coding challenges on YouTube was pretty fun, too. However, nothing beats actually using Raku to learn the language. After reading all of these resources, I may have a good idea about the features and paradigms, but I am by far not an expert.

Guprecords works in three stages:


Running the script then generates a comprehensive uptime report from the collected statistics, making it easy to review and enjoy the data.

Guprecords supports various grouping and reporting features.

You have already seen an example at the very top of this post, where the hosts were grouped by their total lifespans (uptime+downtime). Here's an example of what the global uptime report (grouped by total host uptimes) might look like:

Top 20 Uptime's by Host

+-----+-----------------+-----------------------------+
| Pos | Host            | Uptime                      |
+-----+-----------------+-----------------------------+
| 1.  | *vulcan         | 4 years, 5 months, 6 days   |
| 2.  | uranus          | 3 years, 11 months, 21 days |
| 3.  | sun             | 3 years, 9 months, 26 days  |
| 4.  | uugrn           | 3 years, 5 months, 5 days   |
| 5.  | deltavega       | 3 years, 1 months, 21 days  |
| 6.  | pluto           | 2 years, 10 months, 29 days |
| 7.  | tauceti         | 2 years, 3 months, 19 days  |
| 8.  | tauceti-f       | 1 years, 9 months, 18 days  |
| 9.  | *ultramega15289 | 1 years, 8 months, 17 days  |
| 10. | *earth          | 1 years, 5 months, 22 days  |
| 11. | *blowfish       | 1 years, 4 months, 20 days  |
| 12. | ultramega8477   | 1 years, 3 months, 25 days  |
| 13. | host0           | 1 years, 3 months, 9 days   |
| 14. | tauceti-e       | 1 years, 2 months, 20 days  |
| 15. | makemake        | 1 years, 1 months, 6 days   |
| 16. | callisto        | 0 years, 10 months, 31 days |
| 17. | alphacentauri   | 0 years, 10 months, 28 days |
| 18. | london          | 0 years, 9 months, 16 days  |
| 19. | twofish         | 0 years, 8 months, 31 days  |
| 20. | *fishfinger     | 0 years, 8 months, 17 days  |
+-----+-----------------+-----------------------------+

This table ranks the top 20 hosts based on their total uptime, with the host having the highest uptime at the top. The hosts marked with * are still active, meaning that stats were collected within the last couple of months.

My up-to-date stats can be seen here:

My machine uptime stats

Just recently, I decommissioned vulcan (the number one spot from above), which used to be my CentOS 7 (initially CentOS 6) VM hosting my personal NextCloud and Wallabag (which I modernised just recently with a brand new shiny Rocky Linux 9 VM). This was the last uptimed output before shutting it down (it always makes me feel sentimental to decommission one of my machines :'-():

 #               Uptime | System                                     Boot up

----------------------------+---------------------------------------------------

 1   545 days, 17:58:15 | Linux 3.10.0-1160.15.2.e  Sun Jul 25 19:32:25 2021

 2   279 days, 10:12:14 | Linux 3.10.0-957.21.3.el  Sun Jun 30 12:43:41 2019

 3   161 days, 06:08:43 | Linux 3.10.0-1160.15.2.e  Sun Feb 14 11:05:38 2021

 4   107 days, 01:26:35 | Linux 3.10.0-957.1.3.el7  Thu Dec 20 09:29:13 2018

 5    96 days, 21:13:49 | Linux 3.10.0-1127.13.1.e  Sat Jul 25 17:56:22 2020

-> 6    89 days, 23:05:32 | Linux 3.10.0-1160.81.1.e  Sun Jan 22 12:39:36 2023

 7    63 days, 18:30:45 | Linux 3.10.0-957.10.1.el  Sat Apr 27 18:12:43 2019

 8    63 days, 06:53:33 | Linux 3.10.0-1127.8.2.el  Sat May 23 10:41:08 2020

 9    48 days, 11:44:49 | Linux 3.10.0-1062.18.1.e  Sat Apr  4 22:56:07 2020

10    42 days, 08:00:13 | Linux 3.10.0-1127.19.1.e  Sat Nov  7 11:47:33 2020

11    36 days, 22:57:19 | Linux 3.10.0-1160.6.1.el  Sat Dec 19 19:47:57 2020

12    21 days, 06:16:28 | Linux 3.10.0-957.10.1.el  Sat Apr  6 11:56:01 2019

13    12 days, 20:11:53 | Linux 3.10.0-1160.11.1.e  Mon Jan 25 18:45:27 2021

14     7 days, 21:29:18 | Linux 3.10.0-1127.13.1.e  Fri Oct 30 14:18:04 2020

15     6 days, 20:07:18 | Linux 3.10.0-1160.15.2.e  Sun Feb  7 14:57:35 2021

16     1 day , 21:46:41 | Linux 3.10.0-957.1.3.el7  Tue Dec 18 11:42:19 2018

17     0 days, 01:25:57 | Linux 3.10.0-957.1.3.el7  Tue Dec 18 10:16:08 2018

18     0 days, 00:42:34 | Linux 3.10.0-1160.15.2.e  Sun Jul 25 18:49:38 2021

19     0 days, 00:08:32 | Linux 3.10.0-1160.81.1.e  Sun Jan 22 12:30:52 2023

----------------------------+---------------------------------------------------

1up in    6 days, 22:08:18 | at                        Sat Apr 29 10:53:25 2023

no1 in  455 days, 18:52:44 | at                        Sun Jul 21 07:37:51 2024

up     1586 days, 00:20:28 | since                     Tue Dec 18 10:16:08 2018

down      0 days, 01:08:32 | since                     Tue Dec 18 10:16:08 2018

%up                 99.997 | since                     Tue Dec 18 10:16:08 2018

Guprecords is a small, yet powerful tool for analyzing uptime statistics. While developing Guprecords, I have come to truly appreciate and love Raku's expressiveness. The language is designed to be both powerful and flexible, allowing developers to express their intentions and logic more clearly and concisely.

Raku's expressive syntax, support for multiple programming paradigms, and unique features, such as grammars and lazy evaluation, make it a joy to work with.

Working on Guprecords in Raku has been an enjoyable experience, and I've found that Raku's expressiveness has significantly contributed to the overall quality and effectiveness of the script. The language's ability to elegantly express complex logic and data manipulation tasks makes it an excellent choice for developing tools like these, where expressiveness and productiveness are of the utmost importance.

So far, I have only scratched the surface of what Raku can do. I hope to find more time to become a regular Rakoon (a Raku programmer). I have many ideas for other small tools like Guprecords, but the challenge is finding the time. I'd love to explore Raku grammars, and I would also love to explore writing concurrent code in Raku (I also love Go (Golang), btw!).

E-Mail your comments to hi@foo.zone :-)

Other related posts are:

2008-06-26 Perl Poetry

2011-05-07 Perl Daemon (Service Framework)

2022-05-27 Perl is still a great choice

2022-06-15 Sweating the small stuff - Tiny projects of mine

2023-05-01 Unveiling guprecords.raku: Global Uptime Records with Raku (You are currently reading this)

Back to the main site

        </div>

    </content>

</entry>

<entry>

    <title>'Never split the difference' book notes</title>

    <link href="gemini://foo.zone/gemfeed/2023-04-01-never-split-the-difference-book-notes.gmi" />

    <id>gemini://foo.zone/gemfeed/2023-04-01-never-split-the-difference-book-notes.gmi</id>

    <updated>2023-04-01T20:00:17+03:00</updated>

    <author>

        <name>Paul Buetow aka snonux</name>

        <email>paul@dev.buetow.org</email>

    </author>

    <summary>These are my personal takeaways after reading 'Never split the difference' by Chris Voss. Note that the book contains much more knowledge wisdom and that these notes only contain points I personally found worth writing down. This is mainly for my own use, but you might find it helpful too.</summary>

    <content type="xhtml">

        <div xmlns="http://www.w3.org/1999/xhtml">

            <h1 style='display: inline' id='never-split-the-difference-book-notes'>"Never split the difference" book notes</h1><br />

Published at 2023-04-01T20:00:17+03:00

These are my personal takeaways after reading "Never split the difference" by Chris Voss. Note that the book contains much more knowledge wisdom and that these notes only contain points I personally found worth writing down. This is mainly for my own use, but you might find it helpful too.

(Decorative ASCII art of an open book)

Be a mirror: copy each other to be comfortable with each other and to build up trust. Mirroring is mainly body language; a verbal mirror is to repeat the words the other person just said.

Mirror training is like Jedi training: simple but effective. A mirror needs space, so be silent after asking "you want this?"

Try to have multiple realities in your mind and use facts to distinguish between real and false.

Try: to put a label on someone's emotion and then be silent. Wait for the other to reveal himself. "You seem unhappy about this?"

When the opponent starts with a "no", he feels in control and comfortable. That's why you want him to start with "no".

Get a "That's right" when negotiating. Don't get a "you're right". You can summarise the opponent to get a "that's right".

Win-win is a naive approach when your counterpart plays win-lose, but stay cooperative anyway. Don't compromise, and don't split the difference. We don't compromise because it's right; we do it because it's easy. You must embrace the hard stuff; that's where the great deals are.

The person on the other side is never the issue; the problem is the issue. Keep this in mind to avoid emotional issues with the person and focus on the problem, not the person. The bond is essential; never create an enemy.

An example rent negotiation: I have always paid my rent on time. I have had positive experiences with the building and it would be sad for the landlord to lose a good tenant. I am looking for a win-win agreement between us. Pulling out my research: neighbouring buildings offer much lower prices, even if your building has a better location and better services. How can I afford 200 more...

...then set an extreme anchor.

You always have to embrace thoughtful confrontation for good negotiation and life. Don't avoid honest, clear conflict. It will give you the best deals. Compromises are mostly bad deals for both sides. Most people don't negotiate a win-win but a win-lose. Know the best and worst outcomes and what is acceptable for you.

Calibrated questions give the opponent a sense of power. Ask open "how" questions to get the opponent to solve your problem and move him in your direction. Calibrated questions are the best tools. Summarise everything, and then ask: "How am I supposed to do that?". Asking for help this way with a calibrated question is a powerful tool for joint problem solving.

Being calm and respectful is essential. Without control of your emotions, it won't work. The counterpart will have no idea how constrained they are with your question. Avoid questions which get a yes or short answers. Use "why?".

Counterparts are more involved if the solutions are their own. The counterpart must answer with "that's right", not "you are right". He has to own the problem. If not, then add more why questions.

Prepare 3 to 5 calibrated questions for your counterpart. Be curious about what is really motivating the other side. That way, you can draw out the "Black Swan".

What we don't know can break our deal. Uncovering it can bring us unexpected success. You get what you ask for in this world, but you must learn to ask correctly. Reveal the black swan by asking questions.

When negotiating salary, establish a range rather than a single number: "At top places like corp. I get..." (e.g. remote London on a project basis). Set the range high. Also, check LinkedIn Premium for the salaries.

Slow.... it.... down....

E-Mail your comments to paul@nospam.buetow.org :-)

Other book notes of mine are:

2023-03-16 "The Pragmatic Programmer" book notes

2023-04-01 "Never split the difference" book notes (You are currently reading this)

2023-05-06 "The Obstacle is the Way" book notes

2023-07-17 "Software Developer's Career Guide and Soft Skills" book notes

2023-11-11 "Mind Management" book notes

2024-05-01 "Slow Productivity" book notes

2024-07-07 "The Stoic Challenge" book notes

2024-10-24 "Staff Engineer" book notes

Back to the main site

        </div>

    </content>

</entry>

<entry>

    <title>Gemtexter 2.0.0 - Let's Gemtext again²</title>

    <link href="gemini://foo.zone/gemfeed/2023-03-25-gemtexter-2.0.0-lets-gemtext-again-2.gmi" />

    <id>gemini://foo.zone/gemfeed/2023-03-25-gemtexter-2.0.0-lets-gemtext-again-2.gmi</id>

    <updated>2023-03-25T17:50:32+02:00</updated>

    <author>

        <name>Paul Buetow aka snonux</name>

        <email>paul@dev.buetow.org</email>

    </author>

    <summary>I proudly announce that I've released Gemtexter version `2.0.0`. What is Gemtexter? It's my minimalist static site generator for Gemini Gemtext, HTML and Markdown written in GNU Bash.</summary>

    <content type="xhtml">

        <div xmlns="http://www.w3.org/1999/xhtml">

            <h1 style='display: inline' id='gemtexter-200---let-s-gemtext-again'>Gemtexter 2.0.0 - Let&#39;s Gemtext again²</h1><br />

Published at 2023-03-25T17:50:32+02:00

I proudly announce that I've released Gemtexter version 2.0.0. What is Gemtexter? It's my minimalist static site generator for Gemini Gemtext, HTML and Markdown written in GNU Bash.

https://codeberg.org/snonux/gemtexter

This is a new major release, so it contains a breaking change (see "Meta cache made obsolete").

Let's list what's new!

-=[ typewriters ]=- (ASCII art by jgs, modified by Paul Buetow; formatting lost in the feed)

Gemtexter now supports templating, enabling dynamically generated content in .gmi files before anything is converted to output formats such as HTML and Markdown.

A template file name must have the suffix .gmi.tpl, and the template must be placed in the same directory as the Gemtext .gmi file to be generated. For example, Gemtexter will generate a Gemtext file index.gmi from a given template index.gmi.tpl. <<< and >>> enclose a multiline template block. Any line starting with << will be evaluated as a single line of Bash code, and the output will be written into the resulting Gemtext file.

For example, the template index.gmi.tpl:

Hello world

<< echo "> This site was generated at $(date --iso-8601=seconds) by `Gemtexter`"

Welcome to this capsule!

<<<

for i in {1..10}; do

echo Multiline template line $i

done

>>>

... results in the following index.gmi after running ./gemtexter --generate (or ./gemtexter --template, which performs only the template processing and nothing else):

Hello world

> This site was generated at 2023-03-15T19:07:59+02:00 by Gemtexter

Welcome to this capsule!

Multiline template line 1

Multiline template line 2

Multiline template line 3

Multiline template line 4

Multiline template line 5

Multiline template line 6

Multiline template line 7

Multiline template line 8

Multiline template line 9

Multiline template line 10
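
Conceptually, the single-line << rule can be sketched in a few lines of Bash. This is a simplified stand-in to illustrate the idea, not Gemtexter's actual implementation:

```shell
#!/usr/bin/env bash
# Simplified sketch of single-line template expansion (illustration only):
# lines starting with "<< " are evaluated as Bash and replaced by their
# output; all other lines are copied verbatim.
expand_template() {
    local line
    while IFS= read -r line; do
        if [[ $line == "<< "* ]]; then
            eval "${line#<< }"       # run the rest of the line as Bash
        else
            printf '%s\n' "$line"    # plain Gemtext passes through
        fi
    done < "$1"
}

tpl=$(mktemp --suffix=.gmi.tpl)
printf '%s\n' 'Hello world' '<< echo "Generated by a template"' > "$tpl"
expand_template "$tpl"
rm -f "$tpl"
```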

Another thing you can do is insert an index with links to similar blog posts. E.g.:

See more entries about DTail and Golang:

<< template::inline::index dtail golang

Blablabla...

... scans all other post entries with dtail and golang in the file name and generates a link list like this:

See more entries about DTail and Golang:

=> ./2022-10-30-installing-dtail-on-openbsd.gmi 2022-10-30 Installing DTail on OpenBSD

=> ./2022-04-22-programming-golang.gmi 2022-04-22 The Golang Programming language

=> ./2022-03-06-the-release-of-dtail-4.0.0.gmi 2022-03-06 The release of DTail 4.0.0

=> ./2021-04-22-dtail-the-distributed-log-tail-program.gmi 2021-04-22 DTail - The distributed log tail program (You are currently reading this)

Blablabla...
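
Such an index could conceivably be derived from the date-prefixed file names alone. A simplified stand-in, assuming posts are named YYYY-MM-DD-title.gmi (not Gemtexter's actual template::inline::index code):

```shell
#!/usr/bin/env bash
# Sketch of an inline index: emit a gemtext link line for every post file
# whose name contains one of the given tags. Illustration only.
inline_index() {
    local tag file
    for tag in "$@"; do
        for file in ./*"$tag"*.gmi; do
            [ -e "$file" ] || continue   # skip when the glob matched nothing
            printf '=> %s\n' "$file"
        done
    done | sort -ru                      # newest first, duplicates removed
}

# Usage: inline_index dtail golang
```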

You can configure PRE_GENERATE_HOOK and POST_PUBLISH_HOOK to point to scripts to be executed before running --generate or after running --publish. E.g. you could populate some of the content with an external script before letting Gemtexter do its thing, or you could automatically deploy the site after running --publish.

The sample config file gemtexter.conf includes this as an example now; these scripts will only be executed when they actually exist:


declare -xr POST_PUBLISH_HOOK=./post_publish_hook.sh
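
Such a hook is just an ordinary executable script. A hypothetical post_publish_hook.sh might log each publish run and then deploy the site; the log file name and the rsync target below are made-up placeholders, not part of Gemtexter:

```shell
#!/usr/bin/env bash
# Hypothetical post-publish hook (illustration only).
set -euf -o pipefail

# Record when the site was published:
echo "Site published at $(date --iso-8601=seconds)" >> publish.log

# Deploy the generated HTML to the web server (placeholder host and path):
# rsync -av --delete ./html/ user@example.org:/var/www/foo.zone/
```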

Gemtexter now does set -euf -o pipefail, which helps to eliminate bugs and catch scripting errors sooner. Previous versions only used set -e.
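
The pipefail part matters for pipelines: without it, a pipeline's exit status is that of its last command, so a failure on the left-hand side goes unnoticed. A small demonstration:

```shell
#!/usr/bin/env bash
# 'false | cat' counts as success without pipefail (cat's status wins),
# but fails with pipefail (any failing stage fails the whole pipeline).
( false | cat )
echo "without pipefail: exit $?"   # prints: without pipefail: exit 0

( set -o pipefail; false | cat )
echo "with pipefail: exit $?"      # prints: with pipefail: exit 1
```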

Here is the breaking change to older versions of Gemtexter. The $BASE_CONTENT_DIR/meta directory was made obsolete. meta was used to store various information about all the blog post entries to make generating an Atom feed in Bash easier. Especially the publishing dates of each post were stored there. Instead, the publishing date is now encoded in the .gmi file. And if it is missing, Gemtexter will set it to the current date and time at first run.

An example blog post without any publishing date looks like this:


# Title here

The remaining content of the Gemtext file...

Gemtexter will add a line starting with > Published at ... now. Any subsequent Atom feed generation will then use that date.


# Title here

> Published at 2023-02-26T21:43:51+01:00

The remaining content of the Gemtext file...

Optionally, when the xmllint binary is installed, Gemtexter will perform a simple lint check on the generated Atom feed. This double-checks that the Atom feed is valid XML.
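
Such a check can be reproduced with xmllint from libxml2; the file names below are illustrative:

```shell
#!/usr/bin/env bash
# xmllint --noout prints nothing on success; the exit status signals
# whether the XML parsed cleanly. File names are made up for this demo.
printf '%s\n' '<feed><entry><id>1</id></entry></feed>' > /tmp/atom-ok.xml
printf '%s\n' '<feed><entry>broken</feed>'             > /tmp/atom-bad.xml

xmllint --noout /tmp/atom-ok.xml && echo "atom-ok.xml: well-formed"
xmllint --noout /tmp/atom-bad.xml 2>/dev/null || echo "atom-bad.xml: not well-formed"
```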

Additionally, there were a couple of bug fixes, refactorings and overall improvements in the documentation made.

E-Mail your comments to paul@nospam.buetow.org :-)

Other related posts are:

2021-04-24 Welcome to the Geminispace

2021-06-05 Gemtexter - One Bash script to rule it all

2022-08-27 Gemtexter 1.1.0 - Let's Gemtext again

2023-03-25 Gemtexter 2.0.0 - Let's Gemtext again² (You are currently reading this)

2023-07-21 Gemtexter 2.1.0 - Let's Gemtext again³

2024-10-02 Gemtexter 3.0.0 - Let's Gemtext again⁴

Back to the main site

        </div>

    </content>

</entry>

<entry>

    <title>'The Pragmatic Programmer' book notes</title>

    <link href="gemini://foo.zone/gemfeed/2023-03-16-the-pragmatic-programmer-book-notes.gmi" />

    <id>gemini://foo.zone/gemfeed/2023-03-16-the-pragmatic-programmer-book-notes.gmi</id>

    <updated>2023-03-16T00:55:20+02:00</updated>

    <author>

        <name>Paul Buetow aka snonux</name>

        <email>paul@dev.buetow.org</email>

    </author>

    <summary>These are my personal takeaways after reading 'The Pragmatic Programmer' by David Thomas and Andrew Hunt. Note that the book contains much more knowledge and wisdom and that these notes only contain points I personally found worth writing down. This is mainly for my own use, but you might find it helpful too.</summary>

    <content type="xhtml">

        <div xmlns="http://www.w3.org/1999/xhtml">

            <h1 style='display: inline' id='the-pragmatic-programmer-book-notes'>"The Pragmatic Programmer" book notes</h1><br />

Published at 2023-03-16T00:55:20+02:00

These are my personal takeaways after reading "The Pragmatic Programmer" by David Thomas and Andrew Hunt. Note that the book contains much more knowledge and wisdom and that these notes only contain points I personally found worth writing down. This is mainly for my own use, but you might find it helpful too.

(ASCII art omitted; formatting lost in the feed)

Think about your work while doing it - every day on every project. Have a feeling of continuous improvement.

No one writes perfect code, including you. However:

Erlang: Defensive programming is a waste of time. Let it crash. "This can never happen" - don't practise that kind of self-deception when programming.

Leave assertions in the code, even in production. Only leave out the assertions causing the performance issues.

Take small steps, always, and get feedback for each step the code does. Avoid fortune telling; if you have to engage in it, the step is too large.

Decouple the code (e.g. OOP or functional programming). Prefer interfaces for types and mixins for a class extension over class inheritance.

Don't think outside the box. Find the box. The box is bigger than you think. Think about the hard problem at hand. Do you have to do it a certain way, or do you have to do it at all?

Do what works and not what's fashionable. E.g. does SCRUM make sense? The goal is to deliver deliverables and not to "become" agile.

Add new tools to your repertoire every day and keep the momentum up. Learning new things is your most crucial asset. Invest regularly in your knowledge portfolio. The learning process extends your thinking. It does not matter if you never use it.

Think critically about everything you learn. Use paper for your notes. There is something special about it.

It's your life, and you own it. Bruce Lee once said:

"I am not in this world to live up to your expectations, and neither are you in this world to live up to mine."

It's your life. Share it, celebrate it, be proud and have fun.

How to motivate others to contribute something (e.g. ideas to a startup):

A kindly old stranger was walking through the land when he came upon a village. As he entered, the villagers moved towards their homes, locking doors and windows. The stranger smiled and asked, "Why are you all so frightened? I am a simple traveler, looking for a soft place to stay for the night and a warm place for a meal." "There's not a bite to eat in the whole province," he was told. "We are weak and our children are starving. Better keep moving on." "Oh, I have everything I need," he said. "In fact, I was thinking of making some stone soup to share with all of you." He pulled an iron cauldron from his cloak, filled it with water, and began to build a fire under it. Then, with great ceremony, he drew an ordinary-looking stone from a silken bag and dropped it into the water. By now, hearing the rumor of food, most of the villagers had come out of their homes or watched from their windows. As the stranger sniffed the "broth" and licked his lips in anticipation, hunger began to overcome their fear. "Ahh," the stranger said to himself rather loudly, "I do like a tasty stone soup. Of course, stone soup with cabbage -- that's hard to beat." Soon a villager approached hesitantly, holding a small cabbage he'd retrieved from its hiding place, and added it to the pot. "Wonderful!!" cried the stranger. "You know, I once had stone soup with cabbage and a bit of salt beef as well, and it was fit for a king." The village butcher managed to find some salt beef . . . And so it went, through potatoes, onions, carrots, mushrooms, and so on, until there was indeed a delicious meal for everyone in the village to share. The village elder offered the stranger a great deal of money for the magic stone, but he refused to sell it and traveled on the next day. As he left, the stranger came upon a group of village children standing near the road. He gave the silken bag containing the stone to the youngest child, whispering to the group, "It was not the stone, but the villagers, that had performed the magic."

By working together, everyone contributes what they can, achieving a greater good together.

E-Mail your comments to paul@nospam.buetow.org :-)

Other book notes of mine are:

2023-03-16 "The Pragmatic Programmer" book notes (You are currently reading this)

2023-04-01 "Never split the difference" book notes

2023-05-06 "The Obstacle is the Way" book notes

2023-07-17 "Software Developer's Career Guide and Soft Skills" book notes

2023-11-11 "Mind Management" book notes

2024-05-01 "Slow Productivity" book notes

2024-07-07 "The Stoic Challenge" book notes

2024-10-24 "Staff Engineer" book notes

Back to the main site

        </div>

    </content>

</entry>

<entry>

    <title>How to shut down after work</title>

    <link href="gemini://foo.zone/gemfeed/2023-02-26-how-to-shut-down-after-work.gmi" />

    <id>gemini://foo.zone/gemfeed/2023-02-26-how-to-shut-down-after-work.gmi</id>

    <updated>2023-02-26T23:48:01+02:00</updated>

    <author>

        <name>Paul Buetow aka snonux</name>

        <email>paul@dev.buetow.org</email>

    </author>

    <summary>Do you struggle to fully disconnect from work in the evenings or over the weekend? Shutting down from work won't just improve your work-life balance; it will also significantly improve the quality of your personal life and work. After a restful weekend, you will be much more energized and productive the next working day. So it should not just be in your own, but also your employer's interest that you fully relax and shut down after work.</summary>

    <content type="xhtml">

        <div xmlns="http://www.w3.org/1999/xhtml">

            <h1 style='display: inline' id='how-to-shut-down-after-work'>How to shut down after work</h1><br />

Published at 2023-02-26T23:48:01+02:00

Do you struggle to fully disconnect from work in the evenings or over the weekend? Shutting down from work won't just improve your work-life balance; it will also significantly improve the quality of your personal life and work. After a restful weekend, you will be much more energized and productive the next working day. So it should not just be in your own, but also your employer's interest that you fully relax and shut down after work.

"Music should be heard not only with the ears, but also the soul." (ASCII art of a musical score by Larry Komro; formatting lost in the feed)

Have a routine. Try to finish work around the same time every day. Write any outstanding tasks down for the next day, so you are sure you will remember them. Writing them down works wonders, as you can remove them from your mind for the remainder of the day (or the upcoming weekend), knowing you will surely pick them up the next working day. Tidying up your workplace could also count toward your daily shutdown routine.

A commute home from the office also greatly helps, as it disconnects your work from your personal life. Don't work on your commute home, though! If you don't commute but work from home, then it helps to walk around the block or in a nearby park to disconnect from work.

Unless you are self-employed, you have likely signed an N-hour per week contract with your employer, and your regular working times are from X o'clock in the morning to Y o'clock in the evening (with an M-minute lunch break in the middle). There might be some flexibility in your working times, too. But that kind of flexibility (e.g. extending the lunch break so that there is time to pick up a family member from the airport) will be agreed upon, and you will balance it out, for example by starting work earlier the next day or working late that one time. But overall, your weekly working time will stay at N hours.

Another exception would be when you are on an on-call schedule and are expected to watch your work notifications during out-of-office times. But that is usually only a few days per month and, therefore, not the norm. And it should also be compensated accordingly.

There might be some maintenance work you must carry out, which can only be done over the weekend, but it should be explicitly agreed upon and compensated for. Also, there might be a scenario that a production incident comes up shortly before the end of the work day, requiring you (and your colleagues) to stay a bit longer. But this should be an exceptional case.

Other than that, there is no reason why you should work outside office hours. I know many people who suffer from "the fear of missing out", so Slack messages and E-Mails are checked until late in the evening, during weekends or holidays. I have personally been improving here a lot over the last couple of months, but I still fall into this trap occasionally.

Also, when you respond to Slack messages and E-Mails outside office hours, your colleagues may think that you have nothing better to do. They will also take it for granted and keep messaging you outside regular office times.

Checking for your messages constantly outside of regular office times makes it impossible to shut down and relax from work altogether.

Often, your mind goes back to work-related stuff even after work. That's normal, as you concentrated highly on your work throughout the day. The brain unconsciously continues to work and will automatically present you with random work-related thoughts. You can counteract this by focusing on non-work activities, such as exercise, walks, a passion project, or listening to audiobooks and music.

Some of these can be habit-stacked: Exercise could be combined with watching videos about your passion project (e.g. watching lectures about that new programming language you are currently learning for fun). With walking, for example, you could combine listening to an Audiobook or music, or you could also think about your passion project during that walk.

Even if you have children, getting a pet works wonders. My cat, for example, reminds me a few times daily to take a few minutes' break to pet her, play, or give her food. So my cat not only helps me after work but throughout the day.

My neighbour also works from home, and he has dogs, which he regularly has to take out to the park.

If you are upset about something, making it impossible to shut down from work, write everything down (e.g., with a pen in a paper journal). Writing things down helps you to "get rid" of the negativity, especially after conflicts with colleagues or company decisions you don't agree with. This kind of self-therapy is excellent. Brainstorm all your emotions and (even if opinionated) opinions so that you have everything on paper. Once done, you won't think about it so much anymore, as you know you can access that information if required. Stopping ruminating about it will be much easier now. You will likely never access that information again, though. But at least writing the thoughts down saved your day.

Write down three things which went well for the day. This helps you to appreciate the day.

Think about what's fun and what motivates you. Maybe the next promotion to a Principal or Manager role isn't for you. Many fall into the trap of stressing themselves out to satisfy the employer so that the next upgrade will happen, and think about it constantly, even after work. But it is more important that you enjoy your craftsmanship. Work on what you expect from yourself. Ideally, your goals should be aligned with your employer's. I am not saying you should abandon everything your manager asks you to do, but it is, after all, your life. And you have to decide where and on what you want to work. But don't sell yourself short. Keep track of your accomplishments.

Every day you gave your best was good; the day's outcome doesn't matter. What matters is that you know you gave your best and are closer to your goals than the previous day. This gives you a sense of progress and accomplishment.

There are some days at work after which you feel drained and think you didn't progress towards your goals at all. It's more challenging to shut down from work after such a day. A quick hack is to work on a quick win before the end of the day, giving you a sense of accomplishment after all. Another way is to make progress on your fun passion project after work. It doesn't have to be work-related, but a sense of accomplishment will still be there.

E-Mail your comments to paul@nospam.buetow.org :-)

Back to the main site

        </div>

    </content>

</entry>

<entry>

    <title>Why GrapheneOS rox</title>

    <link href="gemini://foo.zone/gemfeed/2023-01-23-why-grapheneos-rox.gmi" />

    <id>gemini://foo.zone/gemfeed/2023-01-23-why-grapheneos-rox.gmi</id>

    <updated>2023-01-23T15:31:52+02:00</updated>

    <author>

        <name>Paul Buetow aka snonux</name>

        <email>paul@dev.buetow.org</email>

    </author>

    <summary>In 2021 I wrote 'On Being Pedantic about Open-Source', and there was a section 'What about mobile?' where I expressed the dilemma about the necessity of using proprietary mobile operating systems. With GrapheneOS, I found my perfect solution for personal mobile phone use. </summary>

    <content type="xhtml">

        <div xmlns="http://www.w3.org/1999/xhtml">

            <h1 style='display: inline' id='why-grapheneos-rox'>Why GrapheneOS rox</h1><br />

Published at 2023-01-23T15:31:52+02:00

In 2021 I wrote "On Being Pedantic about Open-Source", and there was a section "What about mobile?" where I expressed the dilemma about the necessity of using proprietary mobile operating systems. With GrapheneOS, I found my perfect solution for personal mobile phone use.

On Being Pedantic about Open-Source

What is GrapheneOS?

GrapheneOS is a privacy and security-focused mobile OS with Android app compatibility developed as a non-profit open-source project. It's focused on the research and development of privacy and security technologies, including substantial improvements to sandboxing, exploit mitigations and the permission model.

GrapheneOS is an independent Android distribution based on the Android Open Source Project (AOSP) but hardened in multiple ways. Other independent Android distributions, like LineageOS, are also based on AOSP, but GrapheneOS takes it further so that it can be my daily driver on my phone.

https://GrapheneOS.org

https://LineageOS.org

Art by Joan Stark

(ASCII art; formatting lost in the feed)

GrapheneOS allows configuring up to 32 user profiles (including a guest profile) on a single phone. A profile is a completely different environment within the phone, and it is possible to switch between them instantly. Sessions of a profile can continue running in the background or be fully terminated. Each profile can have completely different settings and different applications installed.

I use my default profile with primarily open-source applications installed, which I trust. I use another profile for banking (PayPal, various proprietary bank apps, Amazon store app, etc.) and another profile for various Google services (which I try to avoid, but I have to use once in a while). Furthermore, I have configured a profile for Social Media use (that one isn't in my default profile, as otherwise I am tempted to scroll social media all the time, which I try to avoid and only want to do intentionally when switching to the corresponding profile!).

The neat thing about the profiles is that some can run a sandboxed version of Google Play (see later in this post), while others don't. So some profiles can entirely operate without any Google Play, and only some profiles (to which I rarely switch) have Google Play enabled.

You notice how much longer (multiple days) your phone lasts on a single charge when Google Play Services isn't running in the background. This says a lot about its background activity and indicates that using Google Play shouldn't be the norm.

There's also the case where I use an app from the Google Play store (because it isn't available from F-Droid) which doesn't require Google Play Services to run in the background. Here's where I use the Aurora Android store. The Aurora store can be installed through F-Droid. Aurora acts as an anonymous proxy from your phone to the Google Play Store and lets you install apps from there. No Google credentials are required for that!

https://f-droid.org

There's a similar solution for watching videos on YouTube. You can use the NewPipe app (also from F-Droid), which acts as an anonymous proxy for watching videos from YouTube. So there isn't any need to install the official YouTube app, and there isn't any need to log in to your Google account. What's so bad about the official app? You don't know which data it is sending about you to Google, so it is a privacy concern.

Before switching to GrapheneOS, I had been using LineageOS on one of my phones for a couple of years. Due to privacy concerns, I didn't install Google Play on the LineageOS phone and only installed apps from the F-Droid store on it. Still, I always had to keep a secondary personal phone around with all of those proprietary apps which (partially) only work with Google Play (e.g. banking, navigation, various travel apps from various airlines, etc.). When travelling, I always had to carry that second phone with Google Play on it, as without it, life would become inconvenient pretty soon.

With GrapheneOS, it is different. Here, I do not just have a separate user profile, "Google", for various Google apps where Google Play runs, but Google Play also runs in a sandbox!!!

GrapheneOS has a compatibility layer providing the option to install and use the official releases of Google Play in the standard app sandbox. Google Play receives no special access or privileges on GrapheneOS, as opposed to the massive amount of highly privileged access it gets elsewhere by bypassing the app sandbox. Instead, the compatibility layer teaches it how to work within the full app sandbox. It also isn't used as a backend for the OS services as it would be elsewhere, since GrapheneOS doesn't use Google Play even when it's installed.

When I need to access Google Play, I can switch to the "Google" profile. Even there, Google is sandboxed to the absolute minimum permissions required to be operational, which gives additional privacy protection.

The sad truth is that Google Maps is still the best navigation app. When driving unknown routes, I can switch to my Google profile to use Google Maps. I don't need to do that when going streets I know about, but it is crucial (for me) to have Google Maps around when driving to a new destination.

Also, Google Translate and Google Lens are still the best translation apps I know. I just recently relocated to another country, where I am still learning the language, so Google Lens has been proven very helpful on various occasions by ad-hoc translating text into English or German for me.

The same applies to banking. Many banking apps require Google Play to be available (It might be even more secure to only use banking apps from the Google Play store due to official support and security updates). I rarely need to access my mobile banking app, but once in a while, I need to. As you have guessed by now, I can switch to my banking profile (with Google Play enabled), do what I need to do, and then terminate the session and go back to my default profile, and then my life can go on :-).

It is great to have the flexibility to use any proprietary Android app when needed. That only applies to around 1% of my phone usage time, but you don't always know when you will need "that one app" right now. So it's perfect that it's covered with the phone you always have with you.

I really want my phone to take good-looking pictures, so that I can later upload them to the Irregular Ninja:

https://irregular.ninja

The stock camera app of AOSP could be better. Photos usually look washed out, and the app lacks features. With GrapheneOS, there are two options:

The GrapheneOS camera app is much better than the stock AOSP camera app. I have compared the photo quality of my Pixel phone under LineageOS and GrapheneOS, and the differences are pronounced. I didn't compare the quality with the official Google camera app, but I have seen some comparison videos, and the differences don't seem groundbreaking.

For automatic backups of my photos, I rely on a self-hosted instance of NextCloud (with a client app available via F-Droid). So there isn't any need to rely on Google apps and services (Google Photos or the Google Camera app) anymore, and that's great!

https://nextcloud.com

I also use NextCloud to synchronize my notes (NextCloud Notes), my RSS news feeds (NextCloud News) and contacts (DAVx5). All apps required are available in the F-Droid store.

Another great thing about GrapheneOS is that, besides putting your apps into different profiles, you can also restrict network access and configure storage scopes per app individually.

For example, let's say you install that one proprietary app from the Google Play Store through the Aurora Store, and then you want to ensure that the app doesn't send data "home" over the internet. Nothing is easier: just remove the network access permission from that one app.

Suppose the app also wants to store and read some data on your phone (e.g. it could be a proprietary app for enhancing photos, which therefore requires storage access to a photo folder). In GrapheneOS, you can configure a storage scope for that particular app, e.g. allow it to read and write only one folder while still forbidding access to all other folders on your phone.

Termux can be installed on any Android phone through F-Droid, so it doesn't need to be a GrapheneOS phone. But I have to mention Termux here as it significantly adds value to my phone experience.

Termux is an Android terminal emulator and Linux environment app that works directly with no rooting or setup required. A minimal base system is installed automatically - additional packages are available using the APT package manager.

https://termux.dev

In short, Termux is an entire Linux environment running on your Android phone. Just pair your phone with a Bluetooth keyboard, and you will have the whole Linux experience. I only use terminal Linux applications with Termux, though. What makes it especially great is that I can write a new blog post (in Neovim through Termux on my phone) or do some coding while travelling (e.g. during a flight), or look up my passwords or other personal documents (through my terminal-based password manager). All changes I commit to Git can be synced to the server with a simple git push once I am online again (e.g. after the plane has landed).
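The offline-edit, sync-later loop described above can be sketched as a plain Git session (a hedged sketch; the file name, commit message, and placeholder identity are made up, not from the original post):

```shell
# Sketch of the Termux offline workflow: commit locally with no network,
# then push once connectivity is back. All names below are illustrative.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "you@example.org"   # placeholder identity
git config user.name "You"
echo "A new blog paragraph, written mid-flight." > post.md
git add post.md
git commit -q -m "Draft written offline in Termux"  # commits need no network
git log --oneline
# once online again (plane landed), a plain push syncs it to the server:
# git push origin main
```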

There are Pixel phones with a screen size of 6", which is decent enough for occasional use like that, and everything (the phone, the BT keyboard, maybe an external battery pack) fits nicely into a small travel pocket.

Strictly speaking, an Android phone is a Linux phone, but it's heavily modified and customized. For me, a "pure" Linux phone is one running a more streamlined Linux kernel in a distribution like Ubuntu Touch or Mobian.

A pure Linux phone, e.g. a PinePhone, Fairphone, Librem 5 or Volla phone with Ubuntu Touch installed, is very appealing to me. It would also provide an even better Linux experience than Termux does. Some support running LineageOS within Anbox, enabling you to occasionally run various proprietary Android apps within Linux.

Ubuntu Touch

More Linux distributions for mobile devices

But here, Google Play would not be sandboxed, and you could not configure individual network permissions and storage scopes as in GrapheneOS. Pure Linux phones usually come with a mediocre camera, and battery life is generally pretty bad (only a few hours). Also, no big tech company pushes the development of Linux phones; everything relies on hobbyists, whereas multiple big tech companies put a lot of effort into Android, and a lot of code also goes into the Android Open Source Project.

Currently, pure Linux phones are only a nice toy to tinker with and are still not ready (will they ever be?) to be a daily driver. SailfishOS may be an exception; I played around with it in the past. It is pretty usable, but it's not an option for me as it is partially a proprietary operating system.

SailfishOS

Sometimes, switching profiles to use a different app is annoying, and you can't copy and paste via the system clipboard from one profile to another. But that's a small price I am willing to pay!

Another thing is that GrapheneOS only runs on Google Pixel phones, whereas LineageOS can be installed on a much larger variety of hardware. On the other hand, GrapheneOS works very well on Pixel phones. The GrapheneOS team can concentrate their development efforts on a smaller set of hardware, which improves the software's quality (best example: the camera app).

And, of course, GrapheneOS is an open-source project. This is a good thing; on the other hand, nobody can guarantee that the OS will not break or damage your phone. You have to trust the GrapheneOS project, and you can donate to the project so they can keep up the great work. But I would rather trust the GrapheneOS team than big tech.

E-Mail your comments to paul@nospam.buetow.org :-)

Back to the main site

        </div>

    </content>

</entry>

<entry>

    <title>(Re)learning Java - My takeaways</title>

    <link href="gemini://foo.zone/gemfeed/2022-12-24-ultrarelearning-java-my-takeaways.gmi" />

    <id>gemini://foo.zone/gemfeed/2022-12-24-ultrarelearning-java-my-takeaways.gmi</id>

    <updated>2022-12-24T23:18:40+02:00</updated>

    <author>

        <name>Paul Buetow aka snonux</name>

        <email>paul@dev.buetow.org</email>

    </author>

    <summary>As a regular participant in the annual Pet Project competition at work, I always try to find a project where I can learn something new. In this post, I would like to share my takeaways after revisiting Java. You can read about my motivations in my 'Creative universe' post:</summary>

    <content type="xhtml">

        <div xmlns="http://www.w3.org/1999/xhtml">

            <h1 style='display: inline' id='relearning-java---my-takeaways'>(Re)learning Java - My takeaways</h1><br />

Published at 2022-12-24T23:18:40+02:00

As a regular participant in the annual Pet Project competition at work, I always try to find a project where I can learn something new. In this post, I would like to share my takeaways after revisiting Java. You can read about my motivations in my "Creative universe" post:

Creative universe

I programmed in Java back in the day as a university student, and I even implemented my Diploma Thesis in Java (it would require some overhaul to be fully compatible with a recent version of Java, though - it still compiles and runs, but with a lot of warnings!):

VS-Sim: Distributed systems simulator

However, after that, I became a Linux Sysadmin and mainly continued programming in Perl, Puppet, bash, and a little Python. For personal use, I also programmed a bit in Haskell and C. After my Sysadmin role, I moved to London and became a Site Reliability Engineer (SRE), where I mainly programmed in Ruby, bash, Puppet and Golang and a little bit of C.

At my workplace, as an SRE, I don't do a lot of Java. I have been reading Java code to understand the software better so I can apply and suggest workarounds or fixes for existing issues and bugs. Most of our stack is in Java, though, and our Software Engineers use Java as their primary programming language.

Over time, I had missed out on many new features added to the language since Java 1.4, so I decided to implement my next Pet Project in Java, with learning the newer aspects of the language as my main goal. Of course, I still liked the idea of winning a Pet Project prize, but my main objective was to level up my Java skills.

The book Effective Java was recommended by my brother and by at least one other colleague at work as one of the best, if not the best, books about Java programming. I read the whole book from beginning to end and immersed myself in it. I fully agree; it is a great book. Every Java developer or Java software engineer should read it!

I also recommend reading the 90-part Effective Java series on dev.to. It's a perfect companion to the book, as it explains all the chapters again from a slightly different perspective and helps you really understand the content.

Kyle Carter's 90-part Effective Java Series

During my lunch breaks, I usually take a walk around the block or in a nearby park. I used that time to listen to the Java Pub House podcast. I listened to every episode and learned tons of new stuff; I can highly recommend this podcast. Especially GraalVM, a high-performance JDK distribution for Java and other JVM languages, captured my attention. GraalVM can compile Java code into native binaries, improving performance and easing the distribution of Java programs. Because of the latter, I should release a VS-Sim GraalVM edition one day as a Linux AppImage ;-).

https://www.javapubhouse.com

https://www.graalvm.org
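For illustration, producing such a native binary with GraalVM's native-image tool looks roughly like this (a sketch from memory, assuming a GraalVM JDK with native-image is installed; Hello is a made-up class):

```
$ javac Hello.java
$ native-image Hello    # ahead-of-time compiles into a standalone binary
$ ./hello               # starts instantly, no JVM required at runtime
```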

I also watched a course on O'Reilly Safari Books Online about Java concurrency. That gave an excellent refresher on how Java thread pools work and which concurrency primitives are available in the standard library.

First, the source code is often the best documentation (if nicely written), and second, it helps you get the hang of the language and standard practices. So I started to read more and more Java code at work, whenever I had to understand how something in particular worked (e.g. while troubleshooting and debugging an issue).

Another great way to get the hang of Java again was to sneak into the code reviews of my Software Engineering colleagues. They are the experts on the matter and a great source of knowledge. It's OK to stay passive and only follow the reviews; sometimes, it's also OK to step up and take ownership of a review. The developers are also always happy to answer any naive questions that come up.

Besides my Pet Project, I also took ownership of a regular roadmap Java project at work: making an internal Java service capable of running in Kubernetes. This meant a series of minor changes plus new classes and unit tests dealing with statelessness and a persistent job queue in Redis. The job also involved reading and understanding a lot of existing Java code. It wasn't part of my job description, but it was fun, and I learned a lot. The service runs smoothly in production now. Of course, all of my code was reviewed by my Software Engineering colleagues.

Of the new language features and syntax, there are many personal takeaways, and I can't possibly list them all, but here are some of my personal highlights:

There are also many ugly corners in Java. Many are doomed to stay there forever due to historical decisions and ensuring backward compatibility with older versions of the Java language and the Java standard library.

While (re)learning Java, I felt like a student again and was initially quite enthusiastic about it. I invested around half a year immersing myself intensively in Java (again); the last time I did that was many years ago as a university student. I even won a Silver Prize at work for the project I implemented this year (2022 as of writing this). I now feel confident understanding, debugging and patching Java code at work, which has boosted my debugging and troubleshooting skills.

I don't hate Java, but I don't love programming in it, either. I will, I guess, always see Java as a necessity to get stuff done (reading code to understand how a service works, adding a tiny feature to make my life easier, adding a quick bug fix to overcome an obstacle...).

Although Java has significantly improved since 1.4, its code still tends towards boilerplate. Not primarily because of the lines of code (Golang code tends to be quite repetitive, too, primarily when no generics are used), but because of the levels of abstraction it uses. Class hierarchies can be ten classes deep or more, and it is challenging to understand what the code is doing. Good test coverage and plenty of documentation can partially mitigate the problem. Big enterprises use Java, and that is also reflected in the language: there are too many libraries and too many abstractions, bundled with too many legacy abstractions and interfaces and too many exceptions in the library APIs. There's even an external library named Lombok which aims to reduce Java boilerplate code. Why is there a need for an external library? It should all be part of Java itself.

https://projectlombok.org/

Java needs a clean cut. The clean cut should be incompatible with previous versions of Java and only promote modern best practices, without all the legacy burden carried around. The same can be said of other languages, e.g. Perl, but Perl already attacks the problem with feature flags which change the behaviour of the language to more modern standards. Or do it like Python, which had a hard (incompatible) cut from version 2 to version 3. It would be painful, for sure. But that is the only way I would enjoy using the language as one of my primary languages for regularly coding new stuff. For now, my Java usage will stay limited to very few projects and the minor things already mentioned in this post.

Am I a Java expert now? No, by far not. But I am better now than before :-).

E-Mail your comments to paul@nospam.buetow.org :-)

Back to the main site

        </div>

    </content>

</entry>

<entry>

    <title>I tried (Doom) Emacs, but I switched back to (Neo)Vim</title>

    <link href="gemini://foo.zone/gemfeed/2022-11-24-i-tried-emacs-but-i-switched-back-to-neovim.gmi" />

    <id>gemini://foo.zone/gemfeed/2022-11-24-i-tried-emacs-but-i-switched-back-to-neovim.gmi</id>

    <updated>2022-11-24T11:17:15+02:00</updated>

    <author>

        <name>Paul Buetow aka snonux</name>

        <email>paul@dev.buetow.org</email>

    </author>

    <summary>As a long-time user of Vim (and NeoVim), I always wondered what GNU Emacs is really about, so I decided to try it. I didn't try vanilla GNU Emacs, but Doom Emacs. I chose Doom Emacs as it is a neat distribution of Emacs with Evil mode enabled by default. Evil mode provides Vi(m) key bindings (so to speak, it emulates Vim within Emacs), and I am pretty sure I won't be ready to give up all the muscle memory I have built over more than a decade.</summary>

    <content type="xhtml">

        <div xmlns="http://www.w3.org/1999/xhtml">

            <h1 style='display: inline' id='i-tried-doom-emacs-but-i-switched-back-to-neovim'>I tried (Doom) Emacs, but I switched back to (Neo)Vim</h1><br />

Published at 2022-11-24T11:17:15+02:00; Updated at 2022-11-26

As a long-time user of Vim (and NeoVim), I always wondered what GNU Emacs is really about, so I decided to try it. I didn't try vanilla GNU Emacs, but Doom Emacs. I chose Doom Emacs as it is a neat distribution of Emacs with Evil mode enabled by default. Evil mode provides Vi(m) key bindings (so to speak, it emulates Vim within Emacs), and I am pretty sure I won't be ready to give up all the muscle memory I have built over more than a decade.

GNU Emacs

Doom Emacs

I used Doom Emacs for around two months. Still, ultimately I decided to switch back to NeoVim as my primary editor and IDE, and to Vim (usually pre-installed on Linux-based systems) and Nvi (usually pre-installed on *BSD systems) as my "always available" editors for quick edits. (It is worth mentioning that I don't have a strong opinion on whether Vim or NeoVim is the better editor; I prefer NeoVim as it comes with better defaults out of the box, but there is no real blocker to using Vim instead.)

Vim

NeoVim

So why did I switch back to the Vi-family?

         _/  \    _(\(o

         /     \  /  _  ^^^o

        /   !   \/  ! &#39;!!!v&#39;

       !  !  \ _&#39; ( \____

       ! . \ _!\   \===^\)

Art by \ _! / __!

Gunnar Z. ! / \ <--- Emacs is a giant dragon

   (\_      _/   _\ )

    \ ^^--^^ __-^ /(__ 

     ^^----^^    "^--v&#39;

Emacs feels like a giant dragon, as it is much more than an editor or an integrated development environment. Emacs is a whole platform of its own. There's an E-Mail client, an IRC client, and even games you can run within Emacs. And you can change Emacs from within Emacs using its own Lisp dialect, Emacs Lisp (Emacs itself is largely programmed in Emacs Lisp). Therefore, Emacs is also its own programming language. You can change every aspect of Emacs within Emacs itself. People jokingly state that Emacs is an operating system and that you should directly use it as the init process (if you don't know what the init process is: under UNIX and similar operating systems, it's the very first userland process launched, running as PID 1. That's usually systemd on Linux-based systems, launchd on macOS, or whatever other init script or init system the OS uses)!
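As a side note, you can see what your own init process is from any shell (a trivial sketch of my own, not from the original post):

```shell
# Print the command name of PID 1 - e.g. "systemd" on most Linux systems
# or "launchd" on macOS. Output varies by system (and inside containers).
ps -p 1 -o comm=
```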

In many aspects, using Emacs is like shooting at everything with a bazooka! However, I prefer it simple. I only wanted Emacs to be a good editor (which it is, too), but there's too much other stuff in Emacs that I don't want to have to care about! Vim and NeoVim do one thing excellently: being great text editors and, when loaded with plugins, decent IDEs, too.

I almost fell in love with Magit, an integrated Git client for Emacs. But I think the best way to interact with Git is to use the git command line directly. I don't worry about typing out all the commands, as the most commonly used ones are in my shell history. Other useful Git programs I use frequently are bit and tig. Also, get a mechanical keyboard; it makes hammering whole commands into the terminal even more enjoyable.

Magit

Tig

Magit is pretty neat for basic Git operations, but I found myself searching the internet for the correct sub-commands to do the things I wanted to do in Git. Mainly, the way branches are managed is confusing. I often fell back to the command line to fix up the mess I produced with Magit (e.g. accidentally pushing to the wrong remote branch, so I found myself fixing things manually on the terminal with the git command and forced pushes...). Magit is hotkey-driven, and common commands are quickly explorable through built-in hotkey menus. Still, I found it challenging to reach the more advanced Git sub-commands that way, which was much more easily accomplished by using the git command directly.

If there is one thing I envy about Emacs, it is that it's a graphical program, whereas the Vi-family of editors is purely terminal-based. I see the benefits of being a graphical program, as this enables the use of multiple fonts simultaneously and the embedding of pictures and graphs (that would be neat for a Markdown preview, for example). There's also GVim (Vim with a GTK UI), but that's more of an afterthought.

There are now graphical front-end clients for NeoVim, but I still need to dig into them. Let me know your experience if you have any. Luckily, I don't rely on anything graphical in my text editor, but it would improve how the editor looks and feels. UTF-8 can already do a lot in the terminal, and terminal emulators also allow you to use TrueType fonts. Still, you will always be limited to one TTF font for the whole terminal; it isn't possible to have, for example, a different font for headings, paragraphs, etc. - you get the idea. TTF+UTF-8 can't beat real graphics.

It is possible to customize every aspect of Emacs through Emacs Lisp. I have done some Elk Scheme programming in the past (a dialect of Lisp), but that was a long time ago, and I am not willing to dive in again just to customize my environment. I would rather take the pragmatic approach and script what I need in VimScript (a terrible language, but it gets the job done!). I watched Damian Conway's VimScript course on O'Reilly Safari Books Online, which I highly recommend. Yes, VimScript feels clunky, funky and weird and is far less elegant than Lisp, but it gets the job done - in most cases! (That reminds me that the Vim team has announced a new major version of VimScript with improvements and language changes - I haven't gotten to it yet - but I assume that VimScript will always stay VimScript.)

Emacs Lisp

Elk Scheme

VimScript

Scripting Vim by Damian Conway

NeoVim is also programmable in Lua, which seems to be a step up, and Vim comes with a Perl plugin API (which was removed from NeoVim, but that is a different story - why would someone remove the most potent, mature text manipulation programming language from one of the most powerful text editors?).

NeoVim Lua API

One example is the workflow with which I compose my blog articles (e.g. the one you are currently reading): I write everything in NeoVim, but I also want every paragraph checked by Grammarly (as English is not my first language). So I write a whole paragraph, select it via visual selection with SHIFT+v, and press ,y to yank the paragraph to the system clipboard. Then I paste the paragraph into Grammarly's browser window with CTRL+v, let Grammarly suggest improvements, and copy the result back to the system clipboard with CTRL+c. In NeoVim, I type ,i to insert the result back, overriding the old paragraph (which is still selected in visual mode) with the new content. That all sounds a bit complicated, but it's surprisingly natural and efficient.

To come back to the example, for the clipboard integration, I use this small VimScript snippet, and I didn't have to dig into any Lisp or Perl for this:


vnoremap ,y !pbcopy<CR>ugv

vnoremap ,i !pbpaste<CR>

nmap ,i !wpbpaste<CR>

That's only a few lines and does precisely what I want. It's quick and dirty but gets the job done! If VimScript becomes too cumbersome, I can use Lua for NeoVim scripting.

Org-mode is an Emacs mode for keeping notes, authoring documents, computational notebooks, literate programming, maintaining to-do lists, planning projects, and more — in a fast and effective plain-text system. There's even a dedicated website for it:

https://orgmode.org/

In short, Org-mode is an "interactive markup language" that helps you organize everything mentioned above. I barely scratched the surface during my two-month experiment with Emacs, and I am impressed by it, so I see the benefits of having it. But it's not for me.

I use "Dead Tree Mode" to organize my work and notes. Dead tree? Yeah, I use an actual pen and a real paper journal (a Leuchtturm or a Moleskine and a set of coloured 0.5 Muji pens are excellent choices). That's far more immersive and flexible than a computer program can ever be. Yes, some automation and interaction with the computer (like calendar scheduling etc.) are missing. Still, an actual paper journal forces you to stay simple and focus on the actual work rather than tinkering with your computer program. (But I could not resist, and I wrote a VimScript which parses a table-of-contents page, in Markdown format, of my scanned paper journals; NeoVim then lets me select a topic so that the corresponding PDF scan of the right journal page gets opened in an external PDF viewer - zathura, which uses Vi keybindings, of course :-). See the appendix of this blog post for that script.)

Zathura

On the road, I also write some of my notes in Markdown format to NextCloud Notes, which is editable from my phone and via NeoVim on my computers. Markdown is much less powerful than Org-mode, but I prefer the simple way. There's a neat terminal application, ranger, which I use to browse my NextCloud Notes once they are synced to a local folder on my machine. ranger is a file manager inspired by Vim; it therefore uses Vim keybindings and feels just natural to me.

Ranger - A Vim inspired file manager

Did I mention that I also use my zsh (my default shell) and my tmux (terminal multiplexer) in Vi-mode?

Z shell

tmux terminal multiplexer

I am not ready to dive deep into the whole world of Emacs. I prefer small and simple tools over complex ones. Emacs comes with many features out of the box, whereas in Vim/NeoVim you need to install many plugins to replicate some of that behaviour. Yes, I need to invest time managing all the Vim/NeoVim plugins I use, but I feel more in control compared to Doom Emacs, where a framework around vanilla Emacs manages all the plugins. I could use vanilla Emacs and manage all my plugins the vanilla way, but for me it's not worth the effort to learn and dive into that, as everything I want to do I can already do with Vim/NeoVim.

I am not saying that Vim/NeoVim are simple programs, but they are much simpler than Emacs, with much smaller footprints; furthermore, they appear more straightforward to me as I am used to them. I only need Vim/NeoVim to be an editor and an IDE (through some plugins), and nothing more.

I understand Emacs users now. Emacs is an incredibly powerful platform for almost everything, not just text editing. With Emacs, you can do nearly anything (writing, editing, programming, calendar scheduling and note taking, Jira integration, playing games, listening to music, reading/writing emails, browsing the web, using it as a calculator, generating HTML pages, configuring interactive menus, jumping between every feature and every file within one single session, chatting on IRC, surfing the Gopherspace... the options are endless). If you want one piece of software which rules it all and you are happy to invest a large part of your time in your platform: pick Emacs. Over time, Emacs will become "your" Emacs, customized to your own needs and the way you work, which makes Emacs users stick to it even more.

Vim/NeoVim also come with a very high degree of customization, but to a lesser extreme than Emacs (and still a much higher degree than most other editors out there). If the best text editor in the world, which can also be tweaked into a decent IDE, is all you are looking for: pick Vim or NeoVim! You will also need to invest a lot of time learning, tweaking and customizing Vim/NeoVim, but that's a little more straightforward, and the result is much more lightweight. Once you get used to the "Vi way of doing things", you never want to change back. I haven't tried the vanilla Emacs keystrokes, but I hear they are terrible (that's probably one of the reasons why Doom Emacs uses Vim keybindings by default).

Update: One reader recommended having a look at NvChad. NvChad is a NeoVim config written in Lua which aims to provide a base configuration with a very beautiful UI and a blazing fast startup time (around 0.02 to 0.07 seconds). It tweaks UI plugins such as telescope, nvim-tree and bufferline to provide an aesthetic UI experience. That sounds interesting!

https://github.com/NvChad/NvChad

E-Mail your comments to paul@nospam.buetow.org :-)

Back to the main site

        </div>

    </content>

</entry>

<entry>

    <title>Installing DTail on OpenBSD</title>

    <link href="gemini://foo.zone/gemfeed/2022-10-30-installing-dtail-on-openbsd.gmi" />

    <id>gemini://foo.zone/gemfeed/2022-10-30-installing-dtail-on-openbsd.gmi</id>

    <updated>2022-10-30T11:03:19+02:00</updated>

    <author>

        <name>Paul Buetow aka snonux</name>

        <email>paul@dev.buetow.org</email>

    </author>

    <summary>This will be a quick blog post, as I am busy with my personal life now. I have relocated to a different country and am still busy arranging things. So bear with me :-)</summary>

    <content type="xhtml">

        <div xmlns="http://www.w3.org/1999/xhtml">

            <h1 style='display: inline' id='installing-dtail-on-openbsd'>Installing DTail on OpenBSD</h1><br />

Published at 2022-10-30T11:03:19+02:00

This will be a quick blog post, as I am busy with my personal life now. I have relocated to a different country and am still busy arranging things. So bear with me :-)

In this post, I want to give a quick overview (or how-to) of installing DTail on OpenBSD, as the official documentation only covers Red Hat and Fedora Linux. This blog post will also serve as my own reference!

https://dtail.dev

I am using Rexify for my OpenBSD automation. Check out the following article covering my Rex setup in a little bit more detail:

Let's Encrypt with OpenBSD and Rex

I will also mention some relevant Rexfile snippets in this post!

   ,_---~~~~~----._

,,,^____ _____``g",

/ __/ /' ^. / \ ^@q f

@f | | | | 0 _/

`/ ~__((@/ __ __((@/ \

| l__l I <--- The Go Gopher

} [______] I

] | | | |

] ~ ~ |

| |

| |

| | A ;

                       _|\,&#39;. /|      /|   `/|-.

                   \`.&#39;    /|      ,            `;.

                  ,&#39;\   A     A         A   A _ /| `.;

                ,/  _              A       _  / _   /|  ;

               /\  / \   ,  ,           A  /    /     `/|

              /_| | _ \         ,     ,             ,/  \

             // | |/ `.\  ,-      ,       ,   ,/ ,/      \/

             / @| |@  / /&#39;   \  \      ,              &gt;  /|    ,--.

            |\_/   \_/ /      |  |           ,  ,/        \  ./&#39; __:..

            |  __ __  |       |  | .--.  ,         &gt;  &gt;   |-&#39;   /     `

          ,/| /  &#39;  \ |       |  |     \      ,           |    /

         /  |&lt;--.__,-&gt;|       |  | .    `.        &gt;  &gt;    /   (

        /_,&#39; \\  ^  /  \     /  /   `.    &gt;--            /^\   |

              \\___/    \   /  /      \__&#39;     \   \   \/   \  |

               `.   |/          ,  ,                  /`\    \  )

                 \  &#39;  |/    ,       V    \          /        `-\

OpenBSD Puffy ---> `|/ ' V V \ .' _

                   &#39;`-.       V       V        \./&#39;\

                       `|/-.      \ /   \ /,---`\         kat

                        /   `._____V_____V&#39;

                                   &#39;     &#39;

First of all, DTail needs to be downloaded and compiled. For that, git, go, and gmake are required:

$ doas pkg_add git go gmake

I am happy that the Go programming language is readily available in the OpenBSD packaging system. Once the dependencies are installed, clone DTail and compile it:

$ mkdir git

$ cd git

$ git clone https://github.com/mimecast/dtail

$ cd dtail

$ gmake

You can verify the version by running the following command:

$ ./dtail --version

DTail 4.1.0 Protocol 4.1 Have a lot of fun!

$ file dtail

dtail: ELF 64-bit LSB executable, x86-64, version 1

Now there isn't any need to keep git, go and gmake around anymore, so they can be deinstalled:

$ doas pkg_delete git go gmake

One day I shall create an official OpenBSD port for DTail.

Installing the binaries is now just a matter of copying them to /usr/local/bin as follows:

$ for bin in dserver dcat dgrep dmap dtail dtailhealth; do

doas cp -p $bin /usr/local/bin/$bin

doas chown root:wheel /usr/local/bin/$bin

done

Also, we will be creating the _dserver service user:

$ doas adduser -class nologin -group _dserver -batch _dserver

$ doas usermod -d /var/run/dserver/ _dserver

The OpenBSD init script is created from scratch (not part of the official DTail project). Run the following to install the bespoke script:

$ cat <<'END' | doas tee /etc/rc.d/dserver

#!/bin/ksh

daemon="/usr/local/bin/dserver"

daemon_flags="-cfg /etc/dserver/dtail.json"

daemon_user="_dserver"

. /etc/rc.d/rc.subr

rc_reload=NO

rc_pre() {

install -d -o _dserver /var/log/dserver

install -d -o _dserver /var/run/dserver/cache

}

rc_cmd $1 &

END

$ doas chmod 755 /etc/rc.d/dserver

This is the task for setting it up via Rex. Note the . . . . lines: they are a placeholder which we will fill in step by step during this blog post:

desc 'Setup DTail';

task 'dtail', group => 'frontends',

sub {

  my $restart = FALSE;

  file &#39;/etc/rc.d/dserver&#39;,

    content =&gt; template(&#39;./etc/rc.d/dserver.tpl&#39;),

    owner =&gt; &#39;root&#39;,

    group =&gt; &#39;wheel&#39;,

    mode =&gt; &#39;755&#39;,

    on_change =&gt; sub { $restart = TRUE };

    .

    .

    .

    .

  service &#39;dserver&#39; =&gt; &#39;restart&#39; if $restart;

  service &#39;dserver&#39;, ensure =&gt; &#39;started&#39;;

};

Now, DTail is fully installed but still needs to be configured. Grab the default config file from GitHub ...

$ doas mkdir /etc/dserver

$ curl https://raw.githubusercontent.com/mimecast/dtail/master/examples/dtail.json.examples |

doas tee /etc/dserver/dtail.json

... and then edit it and adjust LogDir in the Common section to /var/log/dserver. The result will look like this:

"Common": {

"LogDir": "/var/log/dserver",

"Logger": "Fout",

"LogRotation": "Daily",

"CacheDir": "cache",

"SSHPort": 2222,

"LogLevel": "Info"

}

That's as simple as adding the following to the Rex task:

file '/etc/dserver',

ensure => 'directory';

file '/etc/dserver/dtail.json',

content => template('./etc/dserver/dtail.json.tpl'),

owner => 'root',

group => 'wheel',

mode => '755',

on_change => sub { $restart = TRUE };

DTail relies on SSH for secure authentication and communication. However, the system user _dserver has no permission to read the SSH public keys from the user's home directories, so the DTail server also checks for available public keys in an alternative path /var/run/dserver/cache.

The following script, populating the DTail server key cache, can be run periodically via CRON:

$ cat <<'END' | doas tee /usr/local/bin/dserver-update-key-cache.sh

#!/bin/ksh

CACHEDIR=/var/run/dserver/cache

DSERVER_USER=_dserver

DSERVER_GROUP=_dserver

echo 'Updating SSH key cache'

ls /home/ | while read remoteuser; do

keysfile=/home/$remoteuser/.ssh/authorized_keys

if [ -f $keysfile ]; then

    cachefile=$CACHEDIR/$remoteuser.authorized_keys

    echo "Caching $keysfile -&gt; $cachefile"

    cp $keysfile $cachefile

    chown $DSERVER_USER:$DSERVER_GROUP $cachefile

    chmod 600 $cachefile

fi

done

# Cleanup obsolete public SSH keys

find $CACHEDIR -name '*.authorized_keys' -type f |

while read cachefile; do

remoteuser=$(basename $cachefile | cut -d. -f1)

keysfile=/home/$remoteuser/.ssh/authorized_keys

if [ ! -f $keysfile ]; then

    echo "Deleting obsolete cache file $cachefile"

    rm $cachefile

fi

done

echo 'All set...'

END

$ doas chmod 500 /usr/local/bin/dserver-update-key-cache.sh

Note that the script above is a slight variation of the official DTail script. The official one is a bash script, but OpenBSD ships ksh instead. I run it once daily by adding it to /etc/daily.local:

$ echo /usr/local/bin/dserver-update-key-cache.sh | doas tee -a /etc/daily.local

/usr/local/bin/dserver-update-key-cache.sh

That's done by adding ...

file '/usr/local/bin/dserver-update-key-cache.sh',

content => template('./scripts/dserver-update-key-cache.sh.tpl'),

owner => 'root',

group => 'wheel',

mode => '500';

append_if_no_such_line '/etc/daily.local', '/usr/local/bin/dserver-update-key-cache.sh';

... to the Rex task!

Now, it's time to enable and start the DTail server:

$ sudo rcctl enable dserver

$ sudo rcctl start dserver

$ tail -f /var/log/dserver/*.log

INFO|1022-090634|Starting scheduled job runner after 2s

INFO|1022-090634|Starting continuous job runner after 2s

INFO|1022-090644|24204|stats.go:53|2|11|7|||MAPREDUCE:STATS|currentConnections=0|lifetimeConnections=0

INFO|1022-090654|24204|stats.go:53|2|11|7|||MAPREDUCE:STATS|currentConnections=0|lifetimeConnections=0

INFO|1022-090719|Starting server|DTail 4.1.0 Protocol 4.1 Have a lot of fun!

INFO|1022-090719|Generating private server RSA host key

INFO|1022-090719|Starting server

INFO|1022-090719|Binding server|0.0.0.0:2222

INFO|1022-090719|Starting scheduled job runner after 2s

INFO|1022-090719|Starting continuous job runner after 2s

INFO|1022-090729|86050|stats.go:53|2|11|7|||MAPREDUCE:STATS|currentConnections=0|lifetimeConnections=0

INFO|1022-090739|86050|stats.go:53|2|11|7|||MAPREDUCE:STATS|currentConnections=0|lifetimeConnect

.

.

.

Ctrl+C

As we don't want to wait until tomorrow, let's populate the key cache manually:

$ doas /usr/local/bin/dserver-update-key-cache.sh

Updating SSH key cache

Caching /home/_dserver/.ssh/authorized_keys -> /var/cache/dserver/_dserver.authorized_keys

Caching /home/admin/.ssh/authorized_keys -> /var/cache/dserver/admin.authorized_keys

Caching /home/failunderd/.ssh/authorized_keys -> /var/cache/dserver/failunderd.authorized_keys

Caching /home/git/.ssh/authorized_keys -> /var/cache/dserver/git.authorized_keys

Caching /home/paul/.ssh/authorized_keys -> /var/cache/dserver/paul.authorized_keys

Caching /home/rex/.ssh/authorized_keys -> /var/cache/dserver/rex.authorized_keys

All set...

The DTail server is now ready to serve connections. You can use any of the DTail commands, such as dtail, dgrep, dmap, dcat and dtailhealth, to do so. Check out all the usage examples on the official DTail page.

I have installed the DTail server this way on my personal OpenBSD frontends blowfish and fishfinger, and the following command connects as user rex to both machines and greps the file /etc/fstab for the string local:

❯ ./dgrep -user rex -servers blowfish.buetow.org,fishfinger.buetow.org --regex local /etc/fstab

CLIENT|earth|WARN|Encountered unknown host|{blowfish.buetow.org:2222 0xc0000a00f0 0xc0000a61e0 [blowfish.buetow.org]:2222 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9ZnF/LAk14SgqCzk38yENVTNfqibcluMTuKx1u53cKSp2xwHWzy0Ni5smFPpJDIQQljQEJl14ZdXvhhjp1kKHxJ79ubqRtIXBlC0PhlnP8Kd+mVLLHYpH9VO4rnaSfHE1kBjWkI7U6lLc6ks4flgAgGTS5Bb7pLAjwdWg794GWcnRh6kSUEQd3SftANqQLgCunDcP2Vc4KR9R78zBmEzXH/OPzl/ANgNA6wWO2OoKKy2VrjwVAab6FW15h3Lr6rYIw3KztpG+UMmEj5ReexIjXi/jUptdnUFWspvAmzIl6kwzzF8ExVyT9D75JRuHvmxXKKjyJRxqb8UnSh2JD4JN [23.88.35.144]:2222 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9ZnF/LAk14SgqCzk38yENVTNfqibcluMTuKx1u53cKSp2xwHWzy0Ni5smFPpJDIQQljQEJl14ZdXvhhjp1kKHxJ79ubqRtIXBlC0PhlnP8Kd+mVLLHYpH9VO4rnaSfHE1kBjWkI7U6lLc6ks4flgAgGTS5Bb7pLAjwdWg794GWcnRh6kSUEQd3SftANqQLgCunDcP2Vc4KR9R78zBmEzXH/OPzl/ANgNA6wWO2OoKKy2VrjwVAab6FW15h3Lr6rYIw3KztpG+UMmEj5ReexIjXi/jUptdnUFWspvAmzIl6kwzzF8ExVyT9D75JRuHvmxXKKjyJRxqb8UnSh2JD4JN 0xc0000a2180}

CLIENT|earth|WARN|Encountered unknown host|{fishfinger.buetow.org:2222 0xc0000a0150 0xc000460110 [fishfinger.buetow.org]:2222 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDNiikdL7+tWSN0rCaw1tOd9aQgeUFgb830V9ejkyJ5h93PKLCWZSMMCtiabc1aUeUZR//rZjcPHFLuLq/YC+Y3naYtGd6j8qVrcfG8jy3gCbs4tV9SZ9qd5E24mtYqYdGlee6JN6kEWhJxFkEwPfNlG+YAr3KC8lvEAE2JdWvaZavqsqMvHZtAX3b25WCBf2HGkyLZ+d9cnimRUOt+/+353BQFCEct/2mhMVlkr4I23CY6Tsufx0vtxx25nbFdZias6wmhxaE9p3LiWXygPWGU5iZ4RSQSImQz4zyOc9rnJeP1rwGk0OWDJhdKNXuf0kIPdzMfwxv2otgY32/DJj6L [46.23.94.99]:2222 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDNiikdL7+tWSN0rCaw1tOd9aQgeUFgb830V9ejkyJ5h93PKLCWZSMMCtiabc1aUeUZR//rZjcPHFLuLq/YC+Y3naYtGd6j8qVrcfG8jy3gCbs4tV9SZ9qd5E24mtYqYdGlee6JN6kEWhJxFkEwPfNlG+YAr3KC8lvEAE2JdWvaZavqsqMvHZtAX3b25WCBf2HGkyLZ+d9cnimRUOt+/+353BQFCEct/2mhMVlkr4I23CY6Tsufx0vtxx25nbFdZias6wmhxaE9p3LiWXygPWGU5iZ4RSQSImQz4zyOc9rnJeP1rwGk0OWDJhdKNXuf0kIPdzMfwxv2otgY32/DJj6L 0xc0000a2240}

Encountered 2 unknown hosts: 'blowfish.buetow.org:2222,fishfinger.buetow.org:2222'

Do you want to trust these hosts?? (y=yes,a=all,n=no,d=details): a

CLIENT|earth|INFO|STATS:STATS|cgocalls=11|cpu=8|connected=2|servers=2|connected%=100|new=2|throttle=0|goroutines=19

CLIENT|earth|INFO|Added hosts to known hosts file|/home/paul/.ssh/known_hosts

REMOTE|blowfish|100|7|fstab|31bfd9d9a6788844.h /usr/local ffs rw,wxallowed,nodev 1 2

REMOTE|fishfinger|100|7|fstab|093f510ec5c0f512.h /usr/local ffs rw,wxallowed,nodev 1 2

Running it the second time, and given that you trusted the keys the first time, it won't prompt you for the host keys anymore:

❯ ./dgrep -user rex -servers blowfish.buetow.org,fishfinger.buetow.org --regex local /etc/fstab

REMOTE|blowfish|100|7|fstab|31bfd9d9a6788844.h /usr/local ffs rw,wxallowed,nodev 1 2

REMOTE|fishfinger|100|7|fstab|093f510ec5c0f512.h /usr/local ffs rw,wxallowed,nodev 1 2

It's a bit of manual work, but that's fine at this small scale! I shall invest time in creating an official OpenBSD port, though, which would render most of the manual steps outlined in this post obsolete.

Check out the following for more information:

https://dtail.dev

https://github.com/mimecast/dtail

https://www.rexify.org

E-Mail your comments to paul@nospam.buetow.org :-)

Other related posts are:

2021-04-22 DTail - The distributed log tail program

2022-03-06 The release of DTail 4.0.0

2022-10-30 Installing DTail on OpenBSD (You are currently reading this)

2023-09-25 DTail usage examples

Back to the main site

        </div>

    </content>

</entry>

<entry>

    <title>After a bad night's sleep</title>

    <link href="gemini://foo.zone/gemfeed/2022-09-30-after-a-bad-nights-sleep.gmi" />

    <id>gemini://foo.zone/gemfeed/2022-09-30-after-a-bad-nights-sleep.gmi</id>

    <updated>2022-09-30T09:53:23+03:00</updated>

    <author>

        <name>Paul Buetow aka snonux</name>

        <email>paul@dev.buetow.org</email>

    </author>

    <summary>Everyone has it once in a while: A bad night's sleep. Here I attempt to list valuable tips on how to deal with it.</summary>

    <content type="xhtml">

        <div xmlns="http://www.w3.org/1999/xhtml">

            <h1 style='display: inline' id='after-a-bad-night-s-sleep'>After a bad night&#39;s sleep</h1><br />

Published at 2022-09-30T09:53:23+03:00; Updated at 2022-10-12

Everyone has it once in a while: A bad night's sleep. Here I attempt to list valuable tips on how to deal with it.

           z

            z

             Z

   .--.  Z Z

  / _(c\   .-.     __

 | / /  &#39;-;   \&#39;-&#39;`  `\______

 \_\/&#39;/ __/ )  /  )   |      \--,

 | \`""`__-/ .&#39;--/   /--------\  \

  \\`  ///-\/   /   /---;-.    &#39;-&#39;

jgs (________\ \

                         &#39;-&#39;

I don't take a day off after not sleeping enough the previous night. That would be wasting holiday allowance, and I wouldn't be able to enjoy my free time anyway, so why not just work? There's still a way for an IT engineer to be productive (sometimes even more so) with half or less of the usual concentration available!

Often I am already awake early and unable to fall asleep again. My strategy here is to "attack" the day: start work early and finish early. The early bird also encounters fewer distractions from colleagues.

There's never a shortage of small items to tick off my list. Most of these items don't require my full concentration, and I will be happy to get them off my list so that the next day, after a good night's sleep, I can immerse myself again in focused, deep work.

Examples of "small work items" are:

I find it easy to enter the "flow state" after a bad night's sleep. All I need to do is to put on some ambient music (preferably instrumental chill house) and start to work on a not-too-difficult ticket.

Usually, the "flow state" is associated with deep-focused work, but deep-focused work isn't easily possible under sleep deprivation. It's still possible to be in the flow by working on more manageable tasks and leaving the difficult ones for the next day.

I find engaging in discussions and demanding meetings challenging after a lousy night's sleep. I still attend the sessions I am invited to as "only" a participant, but I prefer to reschedule all meetings I am the primary driver of.

This, unfortunately, also includes interviews. Interviews require full concentration. So for interviews, I would find a colleague to step in for me or ask to reschedule the interview altogether. Anything else wouldn't do them justice and would waste everyone's time!

The mind works differently under sleep deprivation: It's easier to invent new stuff as it's easier to have a look at things from different perspectives. Until an hour ago, I didn't know yet what I would be blogging about for this month, and then I just started writing this, and it took me only half an hour to write the first draft of this blog post!

I don't eat breakfast or lunch on these days; I only have dinner. Not eating means my mind doesn't get foggy, and I keep up the work momentum. This is called intermittent fasting, which not only helps to keep the weight under control but also boosts concentration. Furthermore, intermittent fasting is healthy. You should consider including it in your routine, even after a good night's sleep.

I won't have enough energy for strenuous physical exercise on those days, but a 30- to 60-minute stretching session can make the day. Stretching even hurts less under sleep deprivation! The stretching could also be substituted with a light yoga session.

Walking is healthy, and the time can be used to listen to interesting podcasts. The available concentration might not be enough for more sophisticated audio literature, but I will have enough energy for one or two daily walks (~10k steps for the day in total). Sometimes I listen to music during walks, and I also try to catch some bright sunlight.

I don't think that Red Bull is a healthy drink. But once in a while, a can in the early afternoon works wonders, and productivity will skyrocket. Other than Red Bull, drink a lot of water throughout the day. Make sure to pick the sugar-free version; otherwise, your intermittent fast will be broken.

I don't know how to "enforce" a nap, but sometimes I manage to power nap, and it works wonders. A 30-minute nap sometimes brings me back to normal. If you skip the fasting because you are too hungry, it helps to try to nap approximately 30 minutes after eating something.

It's much more challenging to keep the mind "under control" in this state. Every annoyance can be upsetting, which could reflect on work colleagues. It is wise to go into the day with a positive attitude, to always smile, and to be polite to the family and to colleagues at work. Don't take anything out on the people around you; they don't deserve it, as they didn't do anything wrong! Also, remember that some things can't be controlled at all. It's time to let go of the annoyances for the day.

To keep the good vibe, it helps to meditate for 10 minutes. Meditation doesn't need to be anything fancy. It can be just lying on the sofa and observing your thoughts as they come and go. Don't judge your thoughts, as that could put you in a negative mood. It's not necessary to sit in an uncomfortable yoga pose, and it is not required to chant "Ohhmmmmm".

Sometimes a task requiring more concentration demands attention anyway. This is where it helps to write a note in a journal and return to it another day. This doesn't mean slacking off but managing the scarce concentration available for the day. I might repeat myself: today, work off all the small stuff. Tomorrow, do the deep-focused work on that crucial project again.

It's easier to forget things on those days, so everything should be written down to be worked off later. Things written down will not be overlooked!

I wouldn't say I like checking social media, as it can consume a lot of time and become addictive. But once in a while, I want to catch up with my "networks", and a day after a bad night's sleep is the perfect time for it. Once done, you don't have to do it again for the next couple of days!

E-Mail your comments to paul@nospam.buetow.org :-)

Back to the main site

        </div>

    </content>

</entry>

<entry>

    <title>Gemtexter 1.1.0 - Let's Gemtext again</title>

    <link href="gemini://foo.zone/gemfeed/2022-08-27-gemtexter-1.1.0-lets-gemtext-again.gmi" />

    <id>gemini://foo.zone/gemfeed/2022-08-27-gemtexter-1.1.0-lets-gemtext-again.gmi</id>

    <updated>2022-08-27T18:25:57+01:00</updated>

    <author>

        <name>Paul Buetow aka snonux</name>

        <email>paul@dev.buetow.org</email>

    </author>

    <summary>I proudly announce that I've released Gemtexter version `1.1.0`. What is Gemtexter? It's my minimalist static site generator for Gemini Gemtext, HTML and Markdown written in GNU Bash.</summary>

    <content type="xhtml">

        <div xmlns="http://www.w3.org/1999/xhtml">

            <h1 style='display: inline' id='gemtexter-110---let-s-gemtext-again'>Gemtexter 1.1.0 - Let&#39;s Gemtext again</h1><br />

Published at 2022-08-27T18:25:57+01:00

I proudly announce that I've released Gemtexter version 1.1.0. What is Gemtexter? It's my minimalist static site generator for Gemini Gemtext, HTML and Markdown written in GNU Bash.

https://codeberg.org/snonux/gemtexter

It has been around a year since I released the first version, 1.0.0. Although there aren't any groundbreaking changes, there have been a couple of smaller commits and adjustments. I was quite surprised to receive a bunch of feedback and requests about Gemtexter, which means I am not the only person in the universe actually using it.

   .-------.

  _|~~ ~~  |_

=(_|_______|_)=

  |:::::::::|

  |:::::::[]|

  |o=======.|

jgs """""""""

Gemtexter relies on the GNU versions of the tools grep, sed and date and it also requires the Bash shell in version 5 at least. That's now done in the check_dependencies() function:

check_dependencies() {

<i><font color="silver"># At least, Bash 5 is required</font></i>

<b><u><font color="#000000">local</font></u></b> -i required_version=<font color="#000000">5</font>

IFS=. <b><u><font color="#000000">read</font></u></b> -ra version &lt;&lt;&lt; <font color="#808080">"$BASH_VERSION"</font>

<b><u><font color="#000000">if</font></u></b> [ <font color="#808080">"${version[0]}"</font> -lt $required_version ]; <b><u><font color="#000000">then</font></u></b>

    log ERROR <font color="#808080">"ERROR, </font>\"<font color="#808080">bash</font>\"<font color="#808080"> must be at least at major version $required_version!"</font>

    <b><u><font color="#000000">exit</font></u></b> <font color="#000000">2</font>

<b><u><font color="#000000">fi</font></u></b>

<i><font color="silver"># These must be the GNU versions of the commands</font></i>

<b><u><font color="#000000">for</font></u></b> tool <b><u><font color="#000000">in</font></u></b> $DATE $SED $GREP; <b><u><font color="#000000">do</font></u></b>

    <b><u><font color="#000000">if</font></u></b> ! $tool --version | grep -q GNU; <b><u><font color="#000000">then</font></u></b>

        log ERROR <font color="#808080">"ERROR, </font>\"<font color="#808080">$tool</font>\"<font color="#808080"> command is not the GNU version, please install!"</font>

        <b><u><font color="#000000">exit</font></u></b> <font color="#000000">2</font>

    <b><u><font color="#000000">fi</font></u></b>

<b><u><font color="#000000">done</font></u></b>

}

macOS users especially didn't read the README carefully enough and forgot to install GNU Grep, GNU Sed and GNU Date before using Gemtexter.

The Gemtext format doesn't support inline code blocks, but Gemtexter now produces inline code blocks (meaning small code fragments placed in the middle of a paragraph) in the HTML output when the fragment is enclosed in backticks. No adjustments were required for the Markdown output format, because Markdown already supports this out of the box.
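
A minimal sketch of such a backtick conversion (this is only an illustration using GNU sed, not necessarily how Gemtexter implements it):

```shell
#!/usr/bin/env bash
# Sketch: turn `inline code` spans into <code>...</code> for HTML output.
# Illustration only; Gemtexter's actual implementation differs.
line='Install it with `doas pkg_add git go gmake` first.'

# Replace every backtick-enclosed span with a <code> element.
html=$(sed -E 's|`([^`]+)`|<code>\1</code>|g' <<< "$line")
echo "$html"
```

This prints the paragraph with the fragment wrapped in a <code> element while the surrounding prose stays untouched.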

Bash is not the most performant language. Gemtexter already takes a couple of seconds just to generate the Atom feed for around two handfuls of articles on my slightly underpowered Surface Go 2 Linux tablet. Therefore, I introduced a cache so that subsequent Atom feed generation runs finish much quicker. The cache uses a checksum of the Gemtext .gmi file to decide whether the content has changed.
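
The caching idea can be sketched like this (a simplified illustration; the cache location and the generate_feed_entry step are hypothetical and differ from Gemtexter's actual code):

```shell
#!/usr/bin/env bash
# Sketch of a checksum-based cache: only regenerate an Atom entry when
# the source .gmi file's checksum changed since the last run.
set -eu

CACHE_DIR=./.atomcache   # hypothetical cache location
mkdir -p "$CACHE_DIR"

generate_feed_entry () {
    # Stand-in for the expensive Atom entry generation.
    echo "generated entry for $1"
}

maybe_generate () {
    local gmi_file=$1 checksum cache_file
    checksum=$(cksum "$gmi_file" | awk '{ print $1 }')
    cache_file=$CACHE_DIR/$(basename "$gmi_file").cksum
    if [ -f "$cache_file" ] && [ "$(cat "$cache_file")" = "$checksum" ]; then
        echo "cache hit for $gmi_file"
        return
    fi
    generate_feed_entry "$gmi_file"
    echo "$checksum" > "$cache_file"
}

echo 'Hello Gemtext' > post.gmi
maybe_generate post.gmi   # first run: generates the entry
maybe_generate post.gmi   # second run: checksum unchanged, cache hit
```

On the second invocation the checksum matches the cached one, so the expensive generation step is skipped.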

Once your capsule reaches a certain size, it can become annoying to re-generate everything if you only want to preview the HTML or Markdown output of one single content file. The following will add a filter to only generate the files matching a regular expression:
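
Such a filter could look roughly like this (a sketch; the variable and function names here are hypothetical, and Gemtexter's real flag is documented in its README):

```shell
#!/usr/bin/env bash
# Sketch: only (re-)generate output for .gmi files whose names match
# a regular expression. Names here are hypothetical.
set -eu

CONTENT_FILTER=${CONTENT_FILTER:-.}  # default regex: match everything

generate_matching () {
    local gmi_file
    for gmi_file in ./*.gmi; do
        # Skip files not matching the filter regex.
        [[ $gmi_file =~ $CONTENT_FILTER ]] || continue
        echo "would generate HTML and Markdown for $gmi_file"
    done
}

touch 2022-08-27-gemtexter.gmi 2022-07-30-lets-encrypt.gmi
CONTENT_FILTER=gemtexter generate_matching
```

Only the file matching the gemtexter regex is processed; the other .gmi file is skipped.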

The Git support has been completely rewritten. It's now more reliable and faster too. Have a look at the README for more information.

The htmlextras folder now contains all extra files required for the HTML output format such as cascading style sheet (CSS) files and web fonts.

It's now possible to define sub-sections within a Gemtexter capsule. For the HTML output, each sub-section can use its own CSS and web font definitions. E.g.:

The foo.zone main site

The notes sub-section (with different fonts)

Additionally, a couple of bug fixes, refactorings and overall documentation improvements were made.

Overall I think it's a pretty solid 1.1.0 release without anything groundbreaking (therefore no major version jump). But I am happy about it.

E-Mail your comments to paul@nospam.buetow.org :-)

Other related posts are:

2021-04-24 Welcome to the Geminispace

2021-06-05 Gemtexter - One Bash script to rule it all

2022-08-27 Gemtexter 1.1.0 - Let's Gemtext again (You are currently reading this)

2023-03-25 Gemtexter 2.0.0 - Let's Gemtext again²

2023-07-21 Gemtexter 2.1.0 - Let's Gemtext again³

2024-10-02 Gemtexter 3.0.0 - Let's Gemtext again⁴

Back to the main site

        </div>

    </content>

</entry>

<entry>

    <title>Let's Encrypt with OpenBSD and Rex</title>

    <link href="gemini://foo.zone/gemfeed/2022-07-30-lets-encrypt-with-openbsd-and-rex.gmi" />

    <id>gemini://foo.zone/gemfeed/2022-07-30-lets-encrypt-with-openbsd-and-rex.gmi</id>

    <updated>2022-07-30T12:14:31+01:00</updated>

    <author>

        <name>Paul Buetow aka snonux</name>

        <email>paul@dev.buetow.org</email>

    </author>

    <summary>I was amazed at how easy it is to automatically generate and update Let's Encrypt certificates with OpenBSD.</summary>

    <content type="xhtml">

        <div xmlns="http://www.w3.org/1999/xhtml">

            <h1 style='display: inline' id='let-s-encrypt-with-openbsd-and-rex'>Let&#39;s Encrypt with OpenBSD and Rex</h1><br />

Published at 2022-07-30T12:14:31+01:00

I was amazed at how easy it is to automatically generate and update Let's Encrypt certificates with OpenBSD.

                                           /    _    \

The Hebern Machine \ ." ". /

                              ___            /     \

                          ..""   ""..       |   O   |

                         /           \      |       |

                        /             \     |       |

                      ---------------------------------

                    _/  o     (O)     o   _            |

                  _/                    ." ".          |

                I/    _________________/     \         |

              _/I   ."                        |        |

      =====  /  I  /                         /         |

 =====  | | |   \ |       _________________."          |

=> | | | | | / \ / ||__|| __ |

| | | | | | | \ "._." / o o \ ." ". |

| --| --| -| / \ _/ / \ |

____| \ ______ | / | | |

           --------      ---       /        |        | |

          ( )        (O)          /          \      /  |

           -----------------------            ".__."   |

           _|__________________________________________|_

          /                                              \

         /________________________________________________\

                             ASCII Art by John Savard

Let's Encrypt is a non-profit certificate authority run by Internet Security Research Group that provides X.509 certificates for Transport Layer Security (TLS) encryption at no charge. It is the world's largest certificate authority, used by more than 265 million websites, with the goal of all websites being secure and using HTTPS.

Source: Wikipedia

In short, it gives away TLS certificates for your website - for free! The catch is that the certificates are only valid for three months, so it is better to automate certificate generation and renewal.

acme-client is the default Automatic Certificate Management Environment (ACME) client on OpenBSD and part of the OpenBSD base system.

When invoked, the client first checks whether certificates actually need to be generated or renewed.

Oversimplified, the following steps are undertaken by acme-client for generating a new certificate:

There is some (easy) configuration required to make it all work on OpenBSD.

This is what my /etc/acme-client.conf looks like (I copied a template from /etc/examples/acme-client.conf to /etc/acme-client.conf and added my domains at the bottom):

# $OpenBSD: acme-client.conf,v 1.4 2020/09/17 09:13:06 florian Exp $

authority letsencrypt {

api url "https://acme-v02.api.letsencrypt.org/directory"

account key "/etc/acme/letsencrypt-privkey.pem"

}

authority letsencrypt-staging {

api url "https://acme-staging-v02.api.letsencrypt.org/directory"

account key "/etc/acme/letsencrypt-staging-privkey.pem"

}

authority buypass {

api url "https://api.buypass.com/acme/directory"

account key "/etc/acme/buypass-privkey.pem"

contact "mailto:me@example.com"

}

authority buypass-test {

api url "https://api.test4.buypass.no/acme/directory"

account key "/etc/acme/buypass-test-privkey.pem"

contact "mailto:me@example.com"

}

domain buetow.org {

alternative names { www.buetow.org paul.buetow.org }

domain key "/etc/ssl/private/buetow.org.key"

domain full chain certificate "/etc/ssl/buetow.org.fullchain.pem"

sign with letsencrypt

}

domain dtail.dev {

alternative names { www.dtail.dev }

domain key "/etc/ssl/private/dtail.dev.key"

domain full chain certificate "/etc/ssl/dtail.dev.fullchain.pem"

sign with letsencrypt

}

domain foo.zone {

alternative names { www.foo.zone }

domain key "/etc/ssl/private/foo.zone.key"

domain full chain certificate "/etc/ssl/foo.zone.fullchain.pem"

sign with letsencrypt

}

domain irregular.ninja {

alternative names { www.irregular.ninja }

domain key "/etc/ssl/private/irregular.ninja.key"

domain full chain certificate "/etc/ssl/irregular.ninja.fullchain.pem"

sign with letsencrypt

}

domain snonux.land {

alternative names { www.snonux.land }

domain key "/etc/ssl/private/snonux.land.key"

domain full chain certificate "/etc/ssl/snonux.land.fullchain.pem"

sign with letsencrypt

}

For ACME to work, you will need to configure the HTTP daemon so that the "special" ACME requests from Let's Encrypt are served correctly. I am using the standard OpenBSD httpd here. These are the snippets I use for the foo.zone host in /etc/httpd.conf (of course, you need a similar setup for all other hosts as well):

server "foo.zone" {

listen on * port 80

location "/.well-known/acme-challenge/*" {

root "/acme"

request strip 2

}

location * {

block return 302 "https://$HTTP_HOST$REQUEST_URI"

}

}

server "foo.zone" {

listen on * tls port 443

tls {

certificate "/etc/ssl/foo.zone.fullchain.pem"

key "/etc/ssl/private/foo.zone.key"

}

location * {

root "/htdocs/gemtexter/foo.zone"

directory auto index

}

}

As you can see, plain HTTP only serves the ACME challenge path; otherwise, it redirects the requests to TLS. The TLS section then attempts to use the Let's Encrypt certificates.

It is worth noting that httpd will start even without the certificates being present. This will cause a certificate error when you try to reach the HTTPS endpoint, but it helps to bootstrap Let's Encrypt. As you saw in the config snippet above, Let's Encrypt only requests the plain HTTP endpoint for the verification process, so HTTPS doesn't need to be operational at this stage. But once the certificates are generated, you will have to reload or restart httpd to use any new certificate.

You could now run doas acme-client foo.zone to generate the certificate or to renew it. Or you could automate it with CRON.

I have created a script, /usr/local/bin/acme.sh, for that, covering all of my domains:

#!/bin/sh

function handle_cert {

host=$1

# Create symlink, so that relayd also can read it.

crt_path=/etc/ssl/$host

if [ -e $crt_path.crt ]; then

    rm $crt_path.crt

fi

ln -s $crt_path.fullchain.pem $crt_path.crt

# Requesting and renewing certificate.

/usr/sbin/acme-client -v $host

}

has_update=no

handle_cert www.buetow.org

if [ $? -eq 0 ]; then

has_update=yes

fi

handle_cert www.paul.buetow.org

if [ $? -eq 0 ]; then

has_update=yes

fi

handle_cert www.tmp.buetow.org

if [ $? -eq 0 ]; then

has_update=yes

fi

handle_cert www.dtail.dev

if [ $? -eq 0 ]; then

has_update=yes

fi

handle_cert www.foo.zone

if [ $? -eq 0 ]; then

has_update=yes

fi

handle_cert www.irregular.ninja

if [ $? -eq 0 ]; then

has_update=yes

fi

handle_cert www.snonux.land

if [ $? -eq 0 ]; then

has_update=yes

fi

# Pick up the new certs.

if [ $has_update = yes ]; then

/usr/sbin/rcctl reload httpd

/usr/sbin/rcctl reload relayd

/usr/sbin/rcctl restart smtpd

fi

And added the following line to /etc/daily.local to run the script once daily so that certificates will be renewed fully automatically:

/usr/local/bin/acme.sh

I now receive daily output via e-mail like this:

Running daily.local:

acme-client: /etc/ssl/buetow.org.fullchain.pem: certificate valid: 80 days left

acme-client: /etc/ssl/paul.buetow.org.fullchain.pem: certificate valid: 80 days left

acme-client: /etc/ssl/tmp.buetow.org.fullchain.pem: certificate valid: 80 days left

acme-client: /etc/ssl/dtail.dev.fullchain.pem: certificate valid: 80 days left

acme-client: /etc/ssl/foo.zone.fullchain.pem: certificate valid: 80 days left

acme-client: /etc/ssl/irregular.ninja.fullchain.pem: certificate valid: 80 days left

acme-client: /etc/ssl/snonux.land.fullchain.pem: certificate valid: 79 days left

Besides httpd, relayd (mainly for Gemini) and smtpd (for mail, of course) also use TLS certificates. And as you can see in acme.sh, the services are reloaded or restarted (smtpd doesn't support reload) whenever a certificate is generated or updated.

I didn't write all these configuration files by hand. As a matter of fact, everything is automated with the Rex configuration management system.

https://www.rexify.org

At the top of the Rexfile I define all my hosts:

our @acme_hosts = qw/buetow.org paul.buetow.org tmp.buetow.org dtail.dev foo.zone irregular.ninja snonux.land/;

ACME will be set up on the frontends group of hosts. Here, blowfish is the primary and twofish is the secondary OpenBSD box.

group frontends => 'blowfish.buetow.org', 'twofish.buetow.org';

This is my Rex task for the general ACME configuration:

desc 'Configure ACME client';
task 'acme', group => 'frontends',
sub {
    file '/etc/acme-client.conf',
      content => template('./etc/acme-client.conf.tpl',
        acme_hosts => \@acme_hosts,
        is_primary => $is_primary),
      owner => 'root',
      group => 'wheel',
      mode => '644';

    file '/usr/local/bin/acme.sh',
      content => template('./scripts/acme.sh.tpl',
        acme_hosts => \@acme_hosts,
        is_primary => $is_primary),
      owner => 'root',
      group => 'wheel',
      mode => '744';

    file '/etc/daily.local',
      ensure => 'present',
      owner => 'root',
      group => 'wheel',
      mode => '644';

    append_if_no_such_line '/etc/daily.local', '/usr/local/bin/acme.sh';
};

And there is also a Rex task just to run the ACME script remotely:

desc 'Invoke ACME client';
task 'acme_invoke', group => 'frontends',
sub {
    say run '/usr/local/bin/acme.sh';
};

Furthermore, this snippet (also at the top of the Rexfile) helps to determine whether the current server is the primary server (all hosts will be without the www. prefix) or the secondary server (all hosts will be with the www. prefix):

# Bootstrapping the FQDN based on the server IP as the hostname and domain
# facts aren't set yet due to the myname file in the first place.
our $fqdns = sub {
    my $ipv4 = shift;
    return 'blowfish.buetow.org' if $ipv4 eq '23.88.35.144';
    return 'twofish.buetow.org'  if $ipv4 eq '108.160.134.135';
    Rex::Logger::info("Unable to determine hostname for $ipv4", 'error');
    return 'HOSTNAME-UNKNOWN.buetow.org';
};

# To determine whether the server is the primary or the secondary.
our $is_primary = sub {
    my $ipv4 = shift;
    $fqdns->($ipv4) eq 'blowfish.buetow.org';
};
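Just to illustrate how these helpers are used (the expected results are given as comments):

```perl
# Usage illustration for the $fqdns and $is_primary closures defined above.
print $fqdns->('23.88.35.144'), "\n";                                  # blowfish.buetow.org
print $is_primary->('23.88.35.144')    ? "primary\n" : "secondary\n";  # primary
print $is_primary->('108.160.134.135') ? "primary\n" : "secondary\n";  # secondary
```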

The following is the acme-client.conf.tpl Rex template file used for the automation. You can see that the www. prefix isn't set for the primary server. E.g., foo.zone will be served by the primary server (in my case, a server located in Germany) and www.foo.zone by the secondary server (in my case, a server located in Japan):

# $OpenBSD: acme-client.conf,v 1.4 2020/09/17 09:13:06 florian Exp $

authority letsencrypt {
    api url "https://acme-v02.api.letsencrypt.org/directory"
    account key "/etc/acme/letsencrypt-privkey.pem"
}

authority letsencrypt-staging {
    api url "https://acme-staging-v02.api.letsencrypt.org/directory"
    account key "/etc/acme/letsencrypt-staging-privkey.pem"
}

authority buypass {
    api url "https://api.buypass.com/acme/directory"
    account key "/etc/acme/buypass-privkey.pem"
    contact "mailto:me@example.com"
}

authority buypass-test {
    api url "https://api.test4.buypass.no/acme/directory"
    account key "/etc/acme/buypass-test-privkey.pem"
    contact "mailto:me@example.com"
}

<%
our $primary = $is_primary->($vio0_ip);
our $prefix = $primary ? '' : 'www.';
%>
<% for my $host (@$acme_hosts) { %>
domain <%= $prefix.$host %> {
    domain key "/etc/ssl/private/<%= $prefix.$host %>.key"
    domain full chain certificate "/etc/ssl/<%= $prefix.$host %>.fullchain.pem"
    sign with letsencrypt
}
<% } %>

And this is the acme.sh.tpl:

#!/bin/sh
<%
our $primary = $is_primary->($vio0_ip);
our $prefix = $primary ? '' : 'www.';
-%>
function handle_cert {
    host=$1

    # Create symlink, so that relayd also can read it.
    crt_path=/etc/ssl/$host
    if [ -e $crt_path.crt ]; then
        rm $crt_path.crt
    fi
    ln -s $crt_path.fullchain.pem $crt_path.crt

    # Requesting and renewing certificate.
    /usr/sbin/acme-client -v $host
}

has_update=no
<% for my $host (@$acme_hosts) { -%>
handle_cert <%= $prefix.$host %>
if [ $? -eq 0 ]; then
    has_update=yes
fi
<% } -%>

# Pick up the new certs.
if [ $has_update = yes ]; then
    /usr/sbin/rcctl reload httpd
    /usr/sbin/rcctl reload relayd
    /usr/sbin/rcctl restart smtpd
fi

These are the Rex tasks setting up httpd, relayd and smtpd services:

desc 'Setup httpd';
task 'httpd', group => 'frontends',
sub {
    append_if_no_such_line '/etc/rc.conf.local', 'httpd_flags=';

    file '/etc/httpd.conf',
      content => template('./etc/httpd.conf.tpl',
        acme_hosts => \@acme_hosts,
        is_primary => $is_primary),
      owner => 'root',
      group => 'wheel',
      mode => '644',
      on_change => sub { service 'httpd' => 'restart' };

    service 'httpd', ensure => 'started';
};

desc 'Setup relayd';
task 'relayd', group => 'frontends',
sub {
    append_if_no_such_line '/etc/rc.conf.local', 'relayd_flags=';

    file '/etc/relayd.conf',
      content => template('./etc/relayd.conf.tpl',
        ipv6address => $ipv6address,
        is_primary => $is_primary),
      owner => 'root',
      group => 'wheel',
      mode => '600',
      on_change => sub { service 'relayd' => 'restart' };

    service 'relayd', ensure => 'started';
};

desc 'Setup OpenSMTPD';
task 'smtpd', group => 'frontends',
sub {
    Rex::Logger::info('Dealing with mail aliases');
    file '/etc/mail/aliases',
      source => './etc/mail/aliases',
      owner => 'root',
      group => 'wheel',
      mode => '644',
      on_change => sub { say run 'newaliases' };

    Rex::Logger::info('Dealing with mail virtual domains');
    file '/etc/mail/virtualdomains',
      source => './etc/mail/virtualdomains',
      owner => 'root',
      group => 'wheel',
      mode => '644',
      on_change => sub { service 'smtpd' => 'restart' };

    Rex::Logger::info('Dealing with mail virtual users');
    file '/etc/mail/virtualusers',
      source => './etc/mail/virtualusers',
      owner => 'root',
      group => 'wheel',
      mode => '644',
      on_change => sub { service 'smtpd' => 'restart' };

    Rex::Logger::info('Dealing with smtpd.conf');
    file '/etc/mail/smtpd.conf',
      content => template('./etc/mail/smtpd.conf.tpl',
        is_primary => $is_primary),
      owner => 'root',
      group => 'wheel',
      mode => '644',
      on_change => sub { service 'smtpd' => 'restart' };

    service 'smtpd', ensure => 'started';
};

This is the httpd.conf.tpl:

<%
our $primary = $is_primary->($vio0_ip);
our $prefix = $primary ? '' : 'www.';
%>

# Plain HTTP for ACME and HTTPS redirect
<% for my $host (@$acme_hosts) { %>
server "<%= $prefix.$host %>" {
    listen on * port 80
    location "/.well-known/acme-challenge/*" {
        root "/acme"
        request strip 2
    }
    location * {
        block return 302 "https://$HTTP_HOST$REQUEST_URI"
    }
}
<% } %>

# Gemtexter hosts
<% for my $host (qw/foo.zone snonux.land/) { %>
server "<%= $prefix.$host %>" {
    listen on * tls port 443
    tls {
        certificate "/etc/ssl/<%= $prefix.$host %>.fullchain.pem"
        key "/etc/ssl/private/<%= $prefix.$host %>.key"
    }
    location * {
        root "/htdocs/gemtexter/<%= $host %>"
        directory auto index
    }
}
<% } %>

# DTail special host
server "<%= $prefix %>dtail.dev" {
    listen on * tls port 443
    tls {
        certificate "/etc/ssl/<%= $prefix %>dtail.dev.fullchain.pem"
        key "/etc/ssl/private/<%= $prefix %>dtail.dev.key"
    }
    location * {
        block return 302 "https://github.dtail.dev$REQUEST_URI"
    }
}

# Irregular Ninja special host
server "<%= $prefix %>irregular.ninja" {
    listen on * tls port 443
    tls {
        certificate "/etc/ssl/<%= $prefix %>irregular.ninja.fullchain.pem"
        key "/etc/ssl/private/<%= $prefix %>irregular.ninja.key"
    }
    location * {
        root "/htdocs/irregular.ninja"
        directory auto index
    }
}

# buetow.org special hosts
server "<%= $prefix %>buetow.org" {
    listen on * tls port 443
    tls {
        certificate "/etc/ssl/<%= $prefix %>buetow.org.fullchain.pem"
        key "/etc/ssl/private/<%= $prefix %>buetow.org.key"
    }
    block return 302 "https://paul.buetow.org"
}

server "<%= $prefix %>paul.buetow.org" {
    listen on * tls port 443
    tls {
        certificate "/etc/ssl/<%= $prefix %>paul.buetow.org.fullchain.pem"
        key "/etc/ssl/private/<%= $prefix %>paul.buetow.org.key"
    }
    block return 302 "https://foo.zone/contact-information.html"
}

server "<%= $prefix %>tmp.buetow.org" {
    listen on * tls port 443
    tls {
        certificate "/etc/ssl/<%= $prefix %>tmp.buetow.org.fullchain.pem"
        key "/etc/ssl/private/<%= $prefix %>tmp.buetow.org.key"
    }
    root "/htdocs/buetow.org/tmp"
    directory auto index
}

And this is the relayd.conf.tpl:

<%
our $primary = $is_primary->($vio0_ip);
our $prefix = $primary ? '' : 'www.';
%>

log connection

tcp protocol "gemini" {
    tls keypair <%= $prefix %>foo.zone
    tls keypair <%= $prefix %>buetow.org
}

relay "gemini4" {
    listen on <%= $vio0_ip %> port 1965 tls
    protocol "gemini"
    forward to 127.0.0.1 port 11965
}

relay "gemini6" {
    listen on <%= $ipv6address->($hostname) %> port 1965 tls
    protocol "gemini"
    forward to 127.0.0.1 port 11965
}

And last but not least, this is the smtpd.conf.tpl:

<%

our $primary = $is_primary->($vio0_ip);

our $prefix = $primary ? '' : 'www.';

%>

pki "buetow_org_tls" cert "/etc/ssl/<%= $prefix %>buetow.org.fullchain.pem"

pki "buetow_org_tls" key "/etc/ssl/private/<%= $prefix %>buetow.org.key"

table aliases file:/etc/mail/aliases

table virtualdomains file:/etc/mail/virtualdomains

table virtualusers file:/etc/mail/virtualusers

listen on socket

listen on all tls pki "buetow_org_tls" hostname "<%= $prefix %>buetow.org"

listen on all

action localmail mbox alias <aliases>

action receive mbox virtual <virtualusers>

action outbound relay

match from any for domain <virtualdomains> action receive

match from local for local action localmail

match from local for any action outbound

For the complete Rexfile example and all the templates, please look at the Git repository:

https://codeberg.org/snonux/rexfiles

Besides ACME, other things, such as DNS servers, are also rexified. The following command will run all the Rex tasks and configure everything on my frontend machines automatically:

rex commons

commons is a task group I defined which combines the common tasks I always want to execute on all frontend machines. This also includes the ACME tasks mentioned in this article!
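The commons group itself isn't shown in this post; one way to express such a meta-task in Rex is with do_task. The task list below is hypothetical, see the Rexfile in the repository for the authoritative definition:

```perl
desc 'Run all common tasks';
task 'commons', group => 'frontends',
sub {
    # Hypothetical selection of tasks; the real Rexfile defines the full set.
    do_task $_ for qw/acme httpd relayd smtpd/;
};
```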

ACME and Let's Encrypt greatly help reduce recurring manual maintenance work (creating and renewing certificates). Furthermore, all the certificates are free of charge! I love using OpenBSD and Rex to automate all of this.

OpenBSD is a perfect fit here, as all the required tools are already part of the base installation. But I like underdogs: Rex is not as powerful and popular as other configuration management systems (e.g. Puppet, Chef, Salt or even Ansible), and its community is small.

Why reinvent the wheel? I love that a Rexfile is just a Perl DSL, and OpenBSD ships Perl in the base system, so no new programming language had to be added to my mix for configuration management. Also, the acme.sh shell script is not a Bash script but a standard Bourne shell script, so I didn't have to install an additional shell, as OpenBSD does not come with Bash pre-installed.

E-Mail your comments to paul@nospam.buetow.org :-)

Other *BSD related posts are:

2016-04-09 Jails and ZFS with Puppet on FreeBSD

2022-07-30 Let's Encrypt with OpenBSD and Rex (You are currently reading this)

2022-10-30 Installing DTail on OpenBSD

2024-01-13 One reason why I love OpenBSD

2024-04-01 KISS high-availability with OpenBSD

2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage

2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation

Back to the main site

        </div>

    </content>

</entry>
