The aim of this document is to describe the steps I took to build my home server. The document is hosted on that very server, which can be accessed through the xavierb.cc domain. It is written for anybody who wants to build her own home server and gain some kind of independence. It is also for me, to remember: a kind of memorandum.
The license of the document is Creative Commons Attribution-ShareAlike 4.0 (CC-BY-SA 4.0).
If you want to send some feedback, please use the mailing list linked below.
In a way, this document is a tribute to "Como monté mi servidor" by Daniel Clemente Laboreo (2004), which has been an inspiration all these years.
=> This document available via xavierb.cc domain | Creative Commons Attribution-ShareAlike 4.0 (CC-BY-SA 4.0) | Mailing list for comments | "Como monté mi servidor". Daniel Clemente Laboreo (2004)
Thanks for reading it.
My name is Xavier Bordoy. I am a Mathematics teacher in a European country. I am very fond of free software and free knowledge, and I value privacy.
In this sense, I do not use social networks, except federated ones, nor messaging systems that I do not need or that do not respect people's privacy (e.g. WhatsApp): if someone wants to tell me something, she can call me. I also do not need infinite discussion threads, or to see which meal a friend of mine has just eaten. For people close to me, I use Briar, which is peer-to-peer. In general, I do not use technology silos: platforms which lock in user data so that it cannot be accessed from outside the platform.
=> Mastodon personal account | Briar project
I have contributed to some open-source projects, as a user, not as a developer. I do not link to my profiles in those projects here (remember, I prioritize privacy), but "somenxavier" is one of my favorite usernames, if you want to investigate.
Why do I need a home server? Basically for two reasons:
I had three objectives in mind:
Ideally, ARM machines satisfy all these requirements, but each of them has machine-specific installation instructions: RaspberryPi, Olinuxino, Cubieboard, etc. (archlinuxarm has different instructions for each machine, and the specific instructions for the RP Zero 2W are surely not the same as for other ARM machines), and most of them need additional modules for reasonable hard disk space, more RAM, etc. Moreover, most of them require firmware blobs, which I wanted to avoid. So I chose x86-64 machines (other architectures have less tested operating systems).
=> Olinuxino | ArchLinuxARM | Instructions for installing OpenBSD in RP Zero 2W | Extending RaspberryPi to use Harddisk
Originally, I wanted to breathe new life into old machines to avoid electronic waste. I had two machines with a reasonable amount of RAM:
The drawbacks were:
After that, I realized that I already had a fanless machine which does not get hot to the touch: my ad-blocking machine. It is a Chuwi Herobox with 8 GiB of RAM, on which I have installed FreeBSD with AdGuardHome.
=> Chuwi Herobox | FreeBSD | AdGuardHome
I tested the temperature of this machine:
```
# sysctl -a | grep tempera
hw.acpi.thermal.tz0.temperature: 49.1C
dev.cpu.3.temperature: 55.0C
dev.cpu.2.temperature: 54.0C
dev.cpu.1.temperature: 54.0C
dev.cpu.0.temperature: 54.0C
```
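Not part of the original setup, just a convenience sketch: extracting the hottest core from that kind of output. A canned sample stands in for the real sysctl lines so it runs anywhere.

```shell
#!/bin/sh
# Pull the maximum CPU temperature out of `sysctl -a | grep temperature`
# style output (the printf below fakes two of the lines shown above).
printf '%s\n' \
  'dev.cpu.3.temperature: 55.0C' \
  'dev.cpu.2.temperature: 54.0C' |
awk -F': ' '{ gsub(/C/, "", $2); if ($2 + 0 > max) max = $2 + 0 } END { print max "C" }'
```

On the real machine, you would pipe `sysctl -a | grep temperature` into the awk part instead of the canned printf.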
For all these reasons, I bought a new machine: a Chuwi Herobox for $139. It is fanless and it consumes just 6.1 watts. So I expected to spend only a few euros a year on the machine's power: with a mean electricity cost of €0.18944 per kWh, the cost would be 0.0061 kW * 24 h * 365 days * 0.18944 €/kWh = €10.12 per year. But I planned to shut the computer down during storms or whenever I decided to, so that calculated consumption is really a maximum.
=> How to calculate power consumption according to Daniel Clemente | The mean cost per kWh
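The estimate above can be re-checked with a one-liner (the figures are the ones from this section):

```shell
# Annual cost = power (kW) * 24 h * 365 days * price (EUR/kWh)
awk 'BEGIN { printf "%.2f\n", 0.0061 * 24 * 365 * 0.18944 }'
# prints 10.12
```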
I wanted a free operating system. Additionally, I wanted one that was at least minimally proven, that is, not an operating system with almost no users. That left me with *BSD and GNU/Linux.
I tried OpenBSD for a while and there were a few things I did not like:
=> "RSS or Atom syndication for security advisories?"
For all of these reasons, I decided to use GNU/Linux, specifically Artix with OpenRC: I have Artix installed on my laptop and I feel comfortable with it. I would have preferred Alpine for its simplicity and low memory usage, but 1) I ran into some errors (e.g. I could not enable lm_sensord: rc-update add lm_sensor{d} does not work) and 2) its documentation is not as extensive as Arch's. The downside is that Artix is bleeding-edge, so it occasionally breaks.
=> Artix distribution (it has many init variants: dinit, openrc, runit and s6) | Alpine
On one hand, I avoided Debian and all its derivative distributions because they are slow and apt-get does not do the job the way pacman does: I have experienced cyclic dependencies which broke the system. On the other hand, systemd is horrible: it is not PID 1, it is simply everything else. OpenRC or runit are good enough for me: mature and user-friendly (not know-it-all-friendly).
=> Debian | systemd is horrible | OpenRC | runit
I considered other distributions, like voidlinux, but I dismissed them because it would have meant learning yet another package manager.
=> voidlinux
The installation was straightforward with no complications. My partitions are:
```
Device        Start       End   Sectors  Size Type             Mount point
/dev/sda1      2048   3147775   3145728  1.5G EFI System       /boot/efi
/dev/sda2   3147776  45090815  41943040   20G Linux swap
/dev/sda3  45090816 500117503 455026688  217G Linux filesystem /
```
I chose ext4 instead of btrfs, basically, for its stability.
Several days after the installation, I tuned my system in order to have an essentially more secure operating system. I also added more features, but those were minor. Basically, I followed the very well-written Arch Linux documentation: "Security", "General recommendations" and "Improving performance". I also consulted some pages of the Gentoo wiki. For specific questions, I searched Stack Overflow and related sites; nowadays I use those sites less than before, because they exploit their users' knowledge to make money.
=> "Security" in Archlinux wiki | "General recommendations" in Archlinux wiki | "Improving performance" in Archlinux wiki | Security handbook in Gentoo wiki | Gentoo wiki | Stack Overflow | Stackoverflow signs deal with OpenAI to supply data to its models. Techcrunch. (2024)
To avoid SSD degradation, I enabled TRIM in the fstab (discard as a mount option of the / partition) and set mq-deadline as the default I/O scheduler (elevator=mq-deadline among the kernel parameters in /etc/default/grub).
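Concretely, the two changes might look like this (everything except the discard option and the elevator parameter is a placeholder):

```
# /etc/fstab: discard on the root partition
UUID=...  /  ext4  rw,relatime,discard  0 1

# /etc/default/grub: mq-deadline as default scheduler
GRUB_CMDLINE_LINUX_DEFAULT="loglevel=3 elevator=mq-deadline"
```

After editing /etc/default/grub, the GRUB configuration has to be regenerated (grub-mkconfig -o /boot/grub/grub.cfg) for the parameter to take effect.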
I connected the machine to my router via Ethernet and set a static IP:
```
# ln -s /etc/init.d/net.lo /etc/init.d/net.eth0
# rc-update add net.eth0 default
 * service net.eth0 added to runlevel default
# cat /etc/conf.d/net
# For static IP using netmask notation
config_eth0="192.168.0.7 netmask 255.255.255.0"
routes_eth0="default via 192.168.0.1"
dns_servers_eth0="212.166.132.114 212.166.132.96 1.1.1.1"
```
Thus, 192.168.0.7 was the static LAN IP of my machine. To avoid having to remember it, I put it in my laptop's /etc/hosts. I should mention that my router has the IP 192.168.0.1. The DNS servers are the defaults of my broadband provider.
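The corresponding line in the laptop's /etc/hosts would look something like this (the short hostname is an assumption, not my real one):

```
192.168.0.7	server
```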
Finally, I bought an SSD: 480 GB of capacity for just €40.99. The purpose of this disk is to store my web server files. I formatted it as btrfs and added some hardening mount options:
```
$ cat /etc/fstab
# Static information about the filesystems.
# See fstab(5) for details.

# /dev/sda3
UUID=ff1189d4-46b7-4ec9-8ac8-282f9ab39c9d  /  ext4  rw,relatime,discard  0 1

# /dev/sda1
UUID=7635-C0C4  /boot/efi  vfat  rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro  0 2

# /dev/sda2
UUID=0a21af02-d135-42c9-aa32-cc261cfd848d  none  swap  defaults  0 0

# /dev/sdb1
UUID=7bcaf503-b679-4180-9a98-52bb3867d54d  /srv/http/  btrfs  rw,relatime,ssd,discard=async,space_cache=v2,subvolid=5,subvol=/,nosuid,nodev,noexec  0 0
```
This time, I chose btrfs and not ext4 because I planned to take snapshots of the web files at some point in the future.
Having the web files in a separate partition is a good security measure.
I wanted to use NTP to sync the clock of my machine. But I planned not to have the machine turned on 24/7, so I chose the chrony package and configured it appropriately:
```
server ptbnts1.ptb.de iburst nts
server nts.sth1.ntp.se nts iburst
server 0.europe.pool.ntp.org offline iburst
server 1.europe.pool.ntp.org offline iburst
server 3.europe.pool.ntp.org iburst
pool 2.europe.pool.ntp.org iburst
```
=> NTP at ArchWiki | chrony at ArchWiki
Initially, my plan was to use linux-libre and linux-libre-firmware from the Parabola GNU/Linux repository, because they have no binary blobs (I ruled out the AUR package because it would have implied hours of compilation). But I realized that they had two cons:
=> Parabola GNU/Linux | AUR package of linux-libre
So I decided to switch to the ordinary kernel: linux. Some web pages say that LTS is better for stability, but I think it is better to have the latest features, because the kernel's release model already makes linux stable: mainline turns into stable and, at some point, if desired, into longterm.
=> release model of linux kernel
My firewall rules were pretty simple:
With these rules:
At first, I used ufw because it is simple. However, I noticed that some computers were trying to access strange or unused paths. These are some examples from my web server log:
```
178.128.120.151 - - [19/Jul/2023:20:27:24 +0200] "\x16\x03\x01\x00{\x01\x00\x00w\x03\x03~\xE1\x87q\xA1\x09G\x94\x9E\x1A2\x85\xC6\x11\xE8k\x0F\x98w\x19\xF3\xEDh\x10E\xF7r\xE6\x8D\xB2\xFE\xFD\x00\x00\x1A\xC0/\xC0+\xC0\x11\xC0\x07\xC0\x13\xC0\x09\xC0\x14\xC0" 400 150 "-" "-"
87.121.47.240 - - [21/Jul/2023:15:02:47 +0200] "POST /boaform/admin/formLogin HTTP/1.1" 301 162 "http://84.120.157.219:80/admin/login.asp" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:71.0) Gecko/20100101 Firefox/71.0"
45.128.232.176 - - [21/Jul/2023:19:50:34 +0200] "CONNECT google.com:443 HTTP/1.1" 400 150 "-" "-"
```
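As an aside, offending IPs can be pulled out of such a log with a short pipeline. This is only a sketch: the first canned line below is a real one from above, the second is invented for contrast.

```shell
#!/bin/sh
# Print the unique client IPs of requests that got a 4xx status,
# assuming nginx's default "combined" log format ($9 is the status code).
printf '%s\n' \
  '45.128.232.176 - - [21/Jul/2023:19:50:34 +0200] "CONNECT google.com:443 HTTP/1.1" 400 150 "-" "-"' \
  '203.0.113.7 - - [21/Jul/2023:19:51:00 +0200] "GET / HTTP/1.1" 200 512 "-" "curl"' |
awk '$9 ~ /^4/ { print $1 }' | sort -u
```

On the server, you would feed /var/log/nginx/access.log into the awk instead of the printf sample.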
Therefore, I wrote a script to block all known malicious hosts:
```
$ cat banip/script.sh
#!/bin/bash
# The file ban.txt is a symlink (`ln -s`) to the file which contains the IPs to block.
input="ban.txt"
while IFS= read -r line
do
	echo "$line"
	ufw insert 1 deny from "$line" comment "ban IP from list"
done < "$input"
```
and applied it to some lists: prx-brutes (~6K IPs) and pf-badhost (~33K IPs).
=> prx-brutes | pf-badhosts
Running the script on the second list took a very long time: 16 hours to complete just 76% of the file. At this point, I realized that ufw is slow (it is really a Python wrapper around iptables) and that I needed lower-level tools: iptables is fine, but nftables is its successor. So I migrated from ufw to nftables, following this tutorial:
=> "Moving from IPtables to nftables" tutorial
```
# iptables-save > save.txt
# pacman -S nftables nftables-openrc
# iptables-restore-translate -f save.txt > ruleset.nft
# pacman -Rs ufw-openrc ufw
# rc-update add nftables
# nft -f ruleset.nft
# nft list ruleset > migrated-rules.nft
# rc-service nftables save
# rc-service nftables start
# pacman -S iptables-nft
# reboot
```
After some tries, I simplified my rules:
```
table inet my_table {
	set badips {
		type ipv4_addr
		flags interval
		auto-merge
		elements = { 0.0.0.0/8, 1.0.147.18, 1.0.171.2, 1.0.227.12, ... }
	}

	chain lanconn {
		tcp dport 22 accept comment "[nftables] Allow SSH from LAN"
		icmp type echo-request accept comment "[nftables] Allow unlimited pings from LAN"
	}

	chain my_input {
		type filter hook input priority filter; policy drop;
		iifname "lo" accept comment "Accept anything from lo interface"
		ct state vmap { invalid : drop, established : accept, related : accept }
		ip saddr 192.168.0.0/24 jump lanconn
		ip saddr @badips drop comment "[nftables] Block ban IP"
		tcp dport { 80, 8443 } ct state new limit rate 10/second burst 5 packets log prefix "[nftables] HTTP(S) traffic" accept comment "[nftables] Allow HTTP/HTTPS traffic but limit them to 10 new connections per second"
		tcp dport 1965 ct state new limit rate 10/second burst 5 packets log prefix "[nftables] Gemini: new gemini connection" accept comment "[nftables] Accept Gemini project protocol but limit to 10 connections per second"
	}

	chain my_forward {
		type filter hook forward priority filter; policy drop;
	}

	chain my_output {
		type filter hook output priority filter; policy accept;
	}
}
```
The badips set is an idea from Stack Overflow. Anything in this set is blocked. So, no annoyances.
=> Stackoverflow "How to avoid insert repeated rules in nftables"
I wrote a script to ban IPs:
```
#!/bin/bash

banips() {
	input=$1
	echo "Starting banning IPs..."
	while IFS= read -r line
	do
		echo "$line"
		nft add element inet my_table badips { "$line" }
	done < "$input"
}

banips "$1"
```
It simply puts each IP into the badips set. I ran this script on several lists of bad IPs; I got the majority of these files from the FireHOL lists. I also added the list of Google bots (579 lines) because they did not respect robots.txt: I got a scan from a Google IP even though my robots.txt said that no bot has access, except the archive.org bot.
=> prx-brutes.txt (6.256 lines): done in 1m 29s | pf-badhost (33.911 lines; only IPv4): done in 71m 38s. | firehol level1 (2.074 lines) | nginx-badbot-blocker (804 lines) | prx-brutes.txt (6.064 lines) | pf-badhost (33.911 lines) | Bots from Google (579 lines) | Bots from Google. Special Crawlers | More bots from Google | Archive Project | blocklist.de (20.272 lines) | blocklist.de bots (233 lines) | firehol level 2 (15.713 lines) | firehol level 3 (17.183 lines) | firehol webserver (2237 lines)
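With hindsight, a faster alternative to calling nft once per IP would have been to generate a single file and load it in one atomic operation with nft -f. A minimal sketch (file names are assumptions, and I have not run this against a live firewall):

```shell
#!/bin/sh
# Build one nftables command with every IP from the list, so the whole
# set is loaded at once instead of via thousands of `nft add element` calls.
input="${1:-ban.txt}"
{
	printf 'add element inet my_table badips { '
	paste -s -d ',' "$input"   # join all IPs with commas
	printf ' }\n'
} > badips.nft
# afterwards, as root: nft -f badips.nft
```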
For completeness, my robots.txt file is:
```
# robots.txt file

# Archive bot
User-agent: archive.org_bot
Disallow: /priv/

# Any other bot: disallow
User-agent: *
Disallow: /
```
Besides that, I manually added some IPs to the badips set with "nft add element inet my_table badips { ... }" (putting the offending address inside the braces), after reviewing my web server log every day.
My efforts went so far that I even made a script to periodically download some of the above lists and block them (invoked via cron). I also used fail2ban to block offenders dynamically. I had to tweak the fail2ban configuration so that my rules had priority over fail2ban's: "chain_priority = 2" in /etc/fail2ban/action.d/nftables.conf (it was -1).
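The cron entry for that periodic script might have looked like this (the path and schedule here are made up for illustration):

```
# update and apply the ban lists every Sunday at 04:00
0 4 * * 0  /root/banip/update-lists.sh >> /var/log/banip.log 2>&1
```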
This way, my machine was reasonably protected against malicious attacks. Despite all this, the bot attacks continued, and every week, when I reviewed the logs, entries I didn't like kept appearing.
I was tired of that, and I realized I needed something different (see the next sections).
I chose nginx because it is reasonably secure and widely used. My configuration basically redirects HTTP traffic to HTTPS.
However, I realized that ordinary people, including myself, were not thrilled about typing https://xavierb.nsupdate.info:8443/.... So I decided to buy my own domain: xavierb.cc was a good one (my name plus a nod to Creative Commons, because I wanted to release all contents under some Creative Commons license). Later, I got tired of keeping track of whether the Let's Encrypt certificate had renewed correctly, so I decided to buy my own SSL certificate as well. For both, I used Namecheap.
With all of this, the configuration of my nginx server is:
```
# cat /etc/nginx/nginx.conf
#user http;
worker_processes 1;

error_log /var/log/nginx/error.log;
#pid logs/nginx.pid;

events {
	worker_connections 30;
}

http {
	include mime.types;
	default_type application/octet-stream;

	# Include some settings
	include /etc/nginx/conf.d/comuns.conf;

	# root
	root /srv/http;

	# HTTPS servers:

	# Redirecting HTTP to HTTPS permanently
	server {
		listen 80;
		server_name xavierb.cc;
		limit_req zone=primera burst=5;
		limit_conn segona 3;
		if ($scheme != "https") {
			return 301 https://$host:8443$request_uri;
		}
	}

	# Serving xavierb.cc
	# Common options are stored in conf.d/servidor.conf
	server {
		server_name xavierb.cc;
		limit_req zone=primera burst=5;
		limit_conn segona 3;

		# SSL certificates (namecheap bought on 2024-08-18 + 5 years)
		http2 on;
		listen 8443 ssl default_server;
		ssl_certificate /etc/certificats/public.crt;
		ssl_certificate_key /etc/certificats/privat.key;
		ssl_session_timeout 1d;

		# SSL protocol
		ssl_protocols TLSv1.3;
		ssl_prefer_server_ciphers on;

		# include common server options
		include /etc/nginx/conf.d/servidor.conf;
	}
}

# cat /etc/nginx/conf.d/comuns.conf
charset utf-8;
keepalive_timeout 65;
client_body_timeout 5s;
client_header_timeout 5s;
client_max_body_size 20M;
client_header_buffer_size 1k;
send_timeout 10;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
server_tokens off;
types_hash_max_size 4096;

# compression
gzip on;
gzip_comp_level 3;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
gzip_min_length 500;

# caching. Always prefer cached contents on clients
map $sent_http_content_type $expires {
	default off;
	text/html max;
	text/css max;
	image/jpeg 1y;
	image/png 1y;
	image/x-icon max;
	video/webm max;
	application/javascript max;
	application/atom+xml 1h;
}
add_header Cache-Control "public, no-transform";

# HTTP headers (see https://ramshankar.org/blog/posts/2020/configuring-http-security-headers)
# X-Frame. Forces the contents of an iframe to always come from the same domain
add_header X-Frame-Options "deny" always;
# X-XSS-Protection
add_header X-Xss-Protection "1; mode=block" always;
# X-Content-Type-Options
add_header X-Content-Type-Options "nosniff" always;
# No robots
add_header X-Robots-Tag "noindex, nofollow, nosnippet, noarchive";
# Referrer policy
add_header Referrer-Policy strict-origin-when-cross-origin always;
# Permitted Cross Domain Policies
add_header X-Permitted-Cross-Domain-Policies none always;
# Permissions policy
add_header Permissions-Policy "geolocation=(),microphone=()" always;
# Content-Security Policy
## it was 'default-src 'none'' to restrict it even more, but some scripts do not work
add_header Content-Security-Policy "default-src 'self'; base-uri 'self'; style-src 'self'; img-src 'self'; font-src 'self'; media-src 'self'; object-src 'self'; frame-ancestors 'none'; form-action 'none'; script-src 'self' 'unsafe-inline' https://*.jsdelivr.net ; block-all-mixed-content" always;
# HSTS with max-age: 1 month
add_header Strict-Transport-Security "max-age=2592000; includeSubDomains" always;

# http_limit_req, http_limit_conn modules
# https://nginx.org/en/docs/http/ngx_http_limit_req_module.html
# https://nginx.org/en/docs/http/ngx_http_limit_conn_module.html
limit_req_zone $binary_remote_addr zone=primera:10m rate=2r/s;
limit_conn_zone $binary_remote_addr zone=segona:10m;

# cat /etc/nginx/conf.d/servidor.conf
# https://serverfault.com/a/905922
# nobody shall be able to delete anything on this server
if ($request_method = DELETE) {
	return 405;
}

location / {
	index index.html;

	# https://serverfault.com/a/905922
	limit_except GET {
		# block does not inherit the access limitations from above
		deny all;
	}

	# image hijacking or image hotlinking https://serverfault.com/a/907861/474070
	location ~* \.(jpeg|png|gif|ico)$ {
		valid_referers none blocked xavierb.cc xavierb.nsupdate.info;
		if ($invalid_referer) {
			return 403;
		}
	}
}

# Private zone. Needs a password
location /priv {
	index index.html;
	# Enable autoindex
	autoindex on;
	auth_basic "Zona privada";
	auth_basic_user_file /etc/nginx/contrasenyes-privades.txt;
}

# Public zone. Needs a password
location /pub {
	index index.html;
	auth_basic "Enter 'public' and 'public'";
	auth_basic_user_file /etc/nginx/contrasenyes-publiques.txt;
}

error_page 497 https://$host:$server_port$request_uri;

# custom error pages
error_page 404 /lib/404.html;
location /lib/404.html {
	internal;
}
error_page 500 502 503 504 /lib/50x.html;
location /lib/50x.html {
	internal;
}
```
I think the comments in the configuration files are self-explanatory.
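The auth_basic_user_file files referenced in the configuration are plain htpasswd files. One possible way to generate an entry without installing Apache's tools (the username and password here are the public ones mentioned in the config):

```shell
#!/bin/sh
# Produce an htpasswd-style "user:hash" line using openssl's apr1 (MD5)
# scheme, which nginx's auth_basic understands. Append the line to the
# password file referenced by auth_basic_user_file.
user="public"
pass="public"
printf '%s:%s\n' "$user" "$(openssl passwd -apr1 "$pass")"
```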
My aim was to have a secure system. But sometimes security is tedious, and I did not want to apply a measure that would give me a lot of maintenance work in the future. So I avoided chroots and virtual machines, because I would have had to upgrade the isolated instance each time, for example, the nginx package had an update.
This is the list of security measures I applied:
```
# ps --no-headers -Leo user | sort | uniq --count
      1 chrony
      1 dbus
      1 ddclient
      1 http
    135 root

# cat /etc/security/limits.conf
#
#
* soft nproc 200
* hard nproc 500
```
```
# System stuff

## Restricting access to kernel pointers in the proc filesystem
kernel.kptr_restrict = 2

## BPF hardening
net.core.bpf_jit_harden=2
kernel.unprivileged_bpf_disabled=1

## Links protected
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
## https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=30aba6656f61ed44cba445a3c0d38b296fa9e8f5
fs.protected_regular = 1
fs.protected_fifos = 1

## Sandboxing applications: unprivileged users cannot create user namespaces
kernel.unprivileged_userns_clone = 0

## Only admin can use debugging system with ptrace
kernel.yama.ptrace_scope=2

## Disable kexec
kernel.kexec_load_disabled = 1

#########################################
# Network stuff

## TCP/IP stack hardening
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_rfc1337 = 1

## Reverse path filtering
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1

## TCP Fast Open
net.ipv4.tcp_fastopen = 3

## Increase the memory dedicated to the network interfaces
net.core.rmem_default = 1048576
net.core.rmem_max = 16777216
net.core.wmem_default = 1048576
net.core.wmem_max = 16777216
net.core.optmem_max = 65536
net.ipv4.tcp_rmem = 4096 1048576 2097152
net.ipv4.tcp_wmem = 4096 65536 16777216

## Tweak the pending connection handling
net.ipv4.tcp_max_syn_backlog = 8192
net.ipv4.tcp_max_tw_buckets = 2000000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_slow_start_after_idle = 0

## Change TCP keepalive parameters
net.ipv4.tcp_keepalive_time = 60
net.ipv4.tcp_keepalive_intvl = 10
net.ipv4.tcp_keepalive_probes = 6

## To disable ICMP redirect acceptance
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.all.secure_redirects = 0
net.ipv4.conf.default.secure_redirects = 0
net.ipv6.conf.all.accept_redirects = 0
net.ipv6.conf.default.accept_redirects = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0

## TCP timestamps
## + protect against wrapping sequence numbers (at gigabit speeds)
## + round trip time calculation implemented in TCP
## - causes extra overhead and allows uptime detection by scanners like nmap
## enable @ gigabit speeds
net.ipv4.tcp_timestamps = 1

## Do not accept IP source route packets (we are not a router)
net.ipv4.conf.all.accept_source_route = 0
net.ipv6.conf.all.accept_source_route = 0

## Ignore echo broadcast requests to prevent being part of smurf attacks (default)
net.ipv4.icmp_echo_ignore_broadcasts = 1

## Ignore bogus icmp errors (default)
net.ipv4.icmp_ignore_bogus_error_responses = 1
```
I confined some programs with AppArmor, just the critical ones: nginx, gmid (a Gemini server) and ddclient (a client for updating my dynamic domain at Namecheap, and previously at nsupdate). I used this guide. Personally, I miss some free (as in beer) buffer overflow prevention in the kernel, like the openwall patch in the past. Perhaps in the future the Linux developers will implement something like it, in Rust for example.
=> gmid homepage | "How to create an Apparmor profile for nginx on ubuntu 14.04" guide | openwall patch in the past in Daniel Clemente's page
For example, for gmid:
```
# cd /etc/apparmor.d
# aa-autodep gmid
Writing updated profile for /usr/bin/gmid.
# cat usr.bin.gmid
# Last Modified: Mon Jul 17 23:17:56 2023
abi <abi/3.0>,

include <tunables/global>

/usr/bin/gmid flags=(complain) {
  include <abstractions/base>

  /usr/bin/gmid mr,

}
# aa-complain gmid
Setting /usr/bin/gmid to complain mode.
# rc-service gmid restart
# Mess with the program.... accessing from outside the server
# aa-logprof  ## for seeing what this needs
# aa-enforce gmid
Setting /usr/bin/gmid to enforce mode
# rc-service gmid restart
# apparmor_status
```
The essential command is aa-logprof: it walks you through the logged complaints and lets you authorize each kind of access (execute, read, write, etc.).
I decided not to confine the sshd daemon because it is too complicated.
I needed to create the file /etc/init.d/gmid because the gmid package in the AUR did not ship an -openrc counterpart:
```
# cat gmid
#!/sbin/openrc-run
# https://data.gpo.zugaina.org/guru/net-misc/gmid/files/gmid.initd
# Copyright 1999-2021 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2

extra_commands="configtest"
extra_started_commands="reload"

description="Simple and secure Gemini server"
description_configtest="Run gmid's internal config check."
description_reload="Reload the gmid configuration without losing connections."

GMID_CONFIGFILE=${GMID_CONFIGFILE:-/etc/gmid.conf}

command="/usr/bin/gmid"
pidfile="/var/run/gmid.pid"
command_args="-c \"${GMID_CONFIGFILE}\" -P ${pidfile}"

depend() {
	need net
	use dns logger netmount
}

start_pre() {
	if [ "${RC_CMD}" != "restart" ]; then
		configtest || return 1
	fi
}

stop_pre() {
	if [ "${RC_CMD}" = "restart" ]; then
		configtest || return 1
	fi
}

reload() {
	configtest || return 1
	ebegin "Refreshing gmid's configuration"
	start-stop-daemon --signal SIGHUP --pidfile "${pidfile}"
	eend $? "Failed to reload gmid"
}

configtest() {
	ebegin "Checking gmid's configuration"
	${command} -c "${GMID_CONFIGFILE}" -n
	eend $? "failed, please correct errors in the config file"
}
```
With this, I could run rc-update add gmid.
The possibility of eventually receiving a DDoS attack on a public server was present from the very beginning. So I also connected my machine to my router via WLAN. This way, if eth0 collapsed, I could still connect via wlan0 from my home LAN.
I assigned it a static IP from my router via MAC filtering.
One day I realized I needed a paradigm shift: a reverse proxy, a computer that would sit between my server and the internet and filter out the malicious traffic. So I opted for Cloudflare.
On my machine, I removed fail2ban because I no longer needed it, and I configured my firewall to block everything except Cloudflare's servers.
As a "log viewer", it was good: botnet scans dropped to almost nothing (no log entries except from Palo Alto Networks) and I additionally had DDoS protection. However, I changed my mind for two reasons:
I am not sure at all, but I think that if an attacker knew something, she could spoof the source header and send packets with my IP address (or Cloudflare's). So it is not definitively secure.
So I closed my Cloudflare account and went back to my previous setup. But this time I simplified things, removing fail2ban and removing the badips set from my firewall rules. My thinking was: "if the web services are protected well enough, I don't care if someone accesses a malformed or missing URI. They will waste their time. Yes, I will waste some bandwidth too, but the impact will be minimal. If I suffer a DDoS attack, then I'll disconnect or unplug the server. I don't need that".
Now, my nftables rules are:
```
# nft list ruleset
table inet my_table {
	chain lanconn {
		tcp dport 22 accept comment "[nftables] Allow SSH from LAN"
		icmp type echo-request accept comment "[nftables] Allow unlimited pings from LAN"
		tcp dport { 80, 1965, 8443 } log prefix "[nftables] HTTP(S)/Gemini traffic from LAN " accept comment "[nftables] Allow HTTP/Gemini traffic from LAN"
	}

	chain my_input {
		type filter hook input priority filter; policy drop;
		iifname "lo" accept comment "Accept any localhost traffic (lo interface)"
		ct state invalid drop comment "Drop invalid connections"
		ct state established,related accept comment "Accept traffic originated from us"
		ip saddr 192.168.0.0/24 jump lanconn
		tcp dport { 80, 1965, 8443 } ct state new limit rate 10/second burst 5 packets log prefix "[nftables] HTTP(S)/Gemini traffic from WAN " accept comment "[nftables] Allow HTTP(S)/Gemini traffic but limit them to 10 new connections per second"
	}

	chain my_forward {
		type filter hook forward priority filter; policy drop;
	}

	chain my_output {
		type filter hook output priority filter; policy accept;
	}
}
```
These rules are pretty simple, and therefore easy to maintain.
When I want to have only Gemini, I simply remove ports 80 and 8443 from my configuration (this is what I have now). There are almost no annoying bots there, because Gemini is a relatively new protocol. With external proxies, users can still access my content from the web.
=> Smolnet Portal proxy of my capsule.
One of my biggest concerns was that, if someone took control of my server, she should not be able to get into the other machines in my home. Finally, I found a solution: isolate my LAN with a secondary router behind my main router. Thus, I bought a secondary router (TP-Link Archer AX53) for €55 (~$58.38).
=> Asking about the concern | Isolate my LAN network with a secondary router behind my main router. Stackoverflow
This can be summarized in the following schema:
The summary of all my costs is:
This is the final result of what I have:
=> final result
From left to right: my home server (halted), the main router, the adblocker (the FreeBSD machine with AdGuardHome) and the secondary router.
I have used vim for writing and taskbook for managing tasks.
=> taskbook
taskbook, by default, reads a .taskbook.json file which holds its configuration, in particular the directory in which taskbook stores its tasks. I keep a symlink (ln -s) from a project-specific JSON file to this file, so I can have a separate set of taskbook tasks for each project.
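The per-project setup can be sketched like this (all paths are illustrative, not my real ones):

```shell
#!/bin/sh
# Keep one taskbook config per project, with ~/.taskbook.json as a symlink
# pointing at whichever project is currently active.
mkdir -p "$HOME/projects/server"
: > "$HOME/projects/server/taskbook.json"
ln -sf "$HOME/projects/server/taskbook.json" "$HOME/.taskbook.json"
```

Switching projects is then just re-running the `ln -sf` with another target.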
I used gemgen to convert this document from its original Markdown to gemtext, and then edited it manually.
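As a toy illustration of the kind of rewrite gemgen performs (gemgen itself does much more, of course), here a Markdown link is turned into a gemtext `=>` link line:

```shell
#!/bin/sh
# Markdown "[label](url)" becomes gemtext "=> url label".
printf '[Artix](https://artixlinux.org)\n' |
sed -E 's|^\[([^]]+)\]\(([^)]+)\)$|=> \2 \1|'
# prints: => https://artixlinux.org Artix
```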
I started the project on June 29th, 2023. Although I still have minor issues pending (e.g. organizing the documents on the server, or opening port 80 in order to redirect from http://xavierb.cc to https://xavierb.cc:8443), I can say that the project is finished (August 22nd, 2024).
If I built another home server, perhaps I would force myself to use an ARM or RISC-V architecture, mainly for the lower power consumption. Regarding operating systems, perhaps I would try Alpine (for its low memory and disk usage) or, when Hyperbola makes it available, HyperbolaBSD (for security).
=> Hyperbola project home page
The main benefit is freedom:
The cons are essentially maintaining the system, but "pacman -Syu" is enough in my case. I also spend money (energy consumption) and earn none (no monetization of my home server). But I don't mind: not everything in life is about money.