computer log
2025-01-01T00:08:41Z
<name>mio</name>
gemini://tilde.town/~mio/log/atom.xml
<id>gemini://tilde.town/~mio/log/2024-12-04-december-adventure-2024.gmi</id>
<title>
<![CDATA[December Adventure 2024]]>
</title>
<updated>2024-12-31T20:00:00Z</updated>
<author>
<name>mio</name>
</author>
<link href="gemini://tilde.town/~mio/log/2024-12-04-december-adventure-2024.gmi" rel="alternate"/>
<summary>
<![CDATA[gemini://tilde.town/~mio/log/2024-12-04-december-adventure-2024.gmi]]>
</summary>
<content>
<![CDATA[---
date: 2024-12-05T01:28:59Z
update: 2024-12-31T20:00:00Z
An event where the goal is to write some code daily for the month of December.
It should probably be noted this is not my first year participating, though it is my first year with a log entry that wasn't just a list of dates. 2022 was the first year, and activity was recorded at the time over a private shell account on a VM that the event organiser graciously hosted. Made a text-based custom map navigator that year, inspired by text adventures and role-playing world building. Afterwards removed the repo of scripts made during the month and forgot to import the log.
Last year started well enough (contributed a bit to Katalogo's db schema, wrote a shell script to add clients to a Wireguard server, and a tiny minigame for a Secret Santa gift) but things got busy, stopped after the 20th. Will see what happens this year.
This entry will be updated every few days until the end of the month, or life calls and demands what remains of the year, whichever occurs sooner.
Wrote a messy shell script to install a Wireguard server on Alpine Linux and add a number of peers. It's an automated version of the instructions on the Alpine wiki. Had a VM going up for renewal and decided to migrate to another server. Wireguard was one of the applications reinstalled successfully on a VM with the script. While not difficult to set up manually, it would save some time in case of another VM migration or reimaging. Might refactor the script later.
There seems to be an oddity with wireguard-tools (1.0.20210914-r4) whereby if the same interface is reconfigured multiple times, wg-quick will insert subnet drop rules. This was possibly to revoke access for the previously assigned subnets, but something may be wrong with the syntax, which caused all traffic on the net interface to be dropped. Reinstalling the package resets the configuration and resolves the issue.
=> Configure a Wireguard interface (wg) | wg.sh
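For reference, the per-peer step the script automates boils down to something like the sketch below. This is a simplified, hypothetical excerpt rather than the contents of wg.sh; the file names and subnet are placeholders.
# Generate a keypair for the new peer (placeholder file names).
wg genkey | tee peer.key | wg pubkey > peer.pub
# Register the peer against an existing wg0 interface and persist the change.
wg set wg0 peer "$(cat peer.pub)" allowed-ips 10.10.0.2/32
wg-quick save wg0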
Started adapting a build script to cross-compile a mainline kernel for my Asus Chromebook C201, based on the fantastic linux-edge Alpine Linux package script by mps and a kernel config from Arch Linux ARM. It builds slowly on my small VM and has failed twice so far at the package stage due to small sections that hadn't been modified correctly. linux-edge is set up to build for all architectures that Alpine currently supports, but my version only targets armv7.
An Alpine Linux maintainer active in the postmarketOS community also maintains a linux-veyron kernel downstream. However, Arch Linux ARM was the first OS that booted on the C201, before there were other options like PrawnOS, so it is a bit like coming full circle in reusing the linux-armv7 kernel config.
=> linux-edge — Alpine Linux | linux-armv7 — Arch Linux ARM
Update 2024-12-08: the patch is not necessary to use a custom source path outside of ~/aports. Passing in an APORTSDIR environment variable will work, e.g. APORTSDIR=~/my-custom-ports abuild rootbld. The .rootbld-repositories files would be needed for custom package cache locations to be detected. Leaving the patch up as an artifact of December Adventure.
Stapled onto abuild (Alpine Linux's package build application) support for building outside of the default ~/aports path using the rootbld option. Basically it allows using the rootbld option like this:
cd ~/my-ports/repo/pkgname
abuild rootbld
Previously it would have resulted in an error: bwrap: Can't chdir to /home/user/my-custom-ports/repo/pkgname: No such file or directory. This is done by copying the pkgname/ directory into the build chroot before invoking bubblewrap.
The repo directory would still need a .rootbld-repositories file for it to work, similar to the ones found in aports. Add the REPODEST path (the output directory path set in /etc/abuild.conf) or a URL to the custom repo(s) if building packages that depend on other packages only found there.
$mirror/$version/main $mirror/$version/community /path/to/my-ports-repodest/repo
It will remain a local modification as people are probably not supposed to build outside of the aports git repo. Attached the patch on the off chance it may be useful for others. Works with a rebuild of abuild 3.14.1-r3.
A bit of terminology and background on the previous few days, as the logs may have been a bit confusing for people unfamiliar with Alpine Linux software packaging:
rootbld is an option in abuild that enables building a package inside a chroot, usually a small filesystem in which to run processes. It mostly helps to keep source from different build runs separate and to clean up temporary files.
Some people have a personal or third-party source tree where they keep modifications of existing packages (e.g. to enable experimental/unsupported features), legacy software, or packages that might not be a good fit for aports, such as packages that require specific versions of dependencies that don't co-exist with ones in the aports repos.
Mine is tiny at the moment, just a few packages from my migration to Alpine as a desktop distro, where some packages had no direct equivalents or were missing, e.g. mypaint. Some have already been added to aports. A number of packages by other maintainers have also appeared since then, like tmuxinator. Initially my goal had been to gradually move them to aports if the packaging was stable and didn't often have breaking changes, but it calls for a certain amount of time commitment to ensure they remain in good shape. It is probably better not to officially maintain them without being able to assure they will be upgraded in a timely manner and issues fixed as they arise.
December Adventure will probably consist of me revisiting the git repo to remove obsolete or unused package scripts, then importing or adding new scripts for some of the regularly used software and libraries (which is where a bit of coding comes in). This has a different goal: to ease restoration of a working setup and fallback in case of software or hardware failure, and to have better control over the pace of package upgrades. Details have yet to be worked out. For now, would like to fix up my C201 kernel build script and test that it actually boots with a minimal filesystem, before rebuilding (and potentially patching) more packages.
Tried out cross-compiling to armv7 with abuild rootbld and it worked, if a bit slowly on my single-core x86_64 VM. Combined it with the buildrepo command from the lua-aports package to cross-compile a small number of custom packages for armv7 after a repo cleanup and upgrading packages to newer releases. Among other features, buildrepo invokes abuild to build all the packages within a repo or set of repos.
Also have a first passing build of linux-veyron from the build script. It still needs some adjustments, e.g. it installs a number of device trees (.dtb) and the C201 should only need one. Fortunately at this point it's possible to rerun abuild rootpkg to adjust the packaged files and not have to recompile the kernel each time.
Kernel build script temporarily delayed by an issue with ibus (an input method editor) — its gschema files (XML files that enable changing settings with dconf editor) are using deprecated namespace paths and the glib schema generator utility is issuing warnings about it. Some users think the warnings are confusing, as they are mainly targeted towards app developers, and would like their output suppressed within the schema generator. Generated a patch in an attempt to address the warnings, mainly string replacement. Touched code but didn't code anything today.
Kernel build sidetracked, this time by a search for an answer to a question one of the Alpine developers had on whether Qt6 QML (version 6.8.1) was broken on armhf. Haven't got an armhf device with Alpine installed and internet-connected, so the next option was an emulator/Qemu VM or to check it on the aports CI runners (aarch64 servers running in 32-bit). A slight complication is the application needs to be run under a display server like X or Weston, but it's possible to run it with Xvfb simulating X server.
Stuck the test inside an APKBUILD so it could be put through CI. It's very simple and probably doesn't count as coding, but serves the purpose.
check() { mkdir -p "$builddir" cp "$srcdir"/hello.qml "$builddir"/ xvfb-run -a /usr/lib/qt6/bin/qmlscene --quit hello.qml }
A check function drops a copy of a sample script from the Qt documentation into the build directory, and runs the qmlscene executable on the sample script via xvfb-run. If successful, it will exit 0 when done, the script will proceed to the package function (not shown here) and complete the build. The check will fail if there is an error.
The test results indicated something seems to be broken on 32-bit ARM. The test segfaulted on armhf and armv7 while passing on the other arches. Set up VNC and also reproduced the segfault on an armv7 X server installation, where a window appears but the objects don't get rendered. Most of the testing was rebuilding qtdeclarative with different compiler flags to see if it would fix the segfault. No such luck yet. Tried running 2-3 KDE apps that make use of QML as well — segfault.
Fell off the codecycle, no mileage today.
Patched a ruby Rakefile to disable compiling Windows-only libs in an Alpine aport, consisting of a few lines which probably don't count as coding either.
By now readers may notice a pattern here, one that predicts December Adventure to be a non-starter for me this year due to lack of planning on my part. The initial objective was to set aside a few moments to install Nim on a dev box and fix two bugs in an IRC bot library. Realised a week in that there were other tasks to complete before the year ended, half of them seemingly from recent events pushing them further up the list. It's no excuse, only an explanation for why there is so little code in a post ostensibly about programming daily. Will probably continue logging up to and concluding on Christmas Eve, after the "Advent" in Adventure.
Bootstrapped Alpine Linux in a VM that could not boot the cached ISO from the virtual CDROM device.
Due to the default installer setup-alpine loading itself into memory, it should be possible to do an in-place installation, but had not tried it before, since most providers either offer an older Alpine ISO among a list of operating systems that can be mounted or allow loading a custom ISO from their VPS control panel. In this instance an Alpine 3.20.3 virt ISO is listed, but attempting to mount and boot from it caused the VM to go offline. Based on forum comments, this may be due to the ISO files not being found by the host node. Since other clients are having a similar issue it will likely be fixed eventually, and didn't bother opening another ticket right after having requested and received an IPv6 assignment. Took the matter into my own hands in the meantime and started looking for a way to replace the existing Debian 11 template installation.
Found a pair of scripts by codonaft that did the trick well by fetching the virt ISO, mounting and copying the files into the existing filesystem, before adding a directive to grub for Alpine and to reboot to it. Once booted into Alpine with the installer available, the disk can be unmounted, wiped and replaced with a new installation. Modified it slightly to add gnupg to the packages installed in the first script (used to verify the ISO signature).
=> Installing Alpine Linux from another preinstalled GNU/Linux distro
Wrote a few trivial lines of shell script to generate QR codes with qrencode for tilde.town zine cover art. For the 8th issue, the cover concept parodies a Magic 8-Ball toy in its packaging. The back cover includes a simple oracle table with 20 possible responses like the Magic 8-Ball.
=> zine front cover | zine back cover | tilde town zine
The script loops through the list of replies, checks whether the line is a non-empty string and pipes each line of text to qrencode to generate a SVG file in the current directory, with a numerical filename starting at 1.svg and incrementing. The result is 20 QR codes that can then be arranged in rows in the back cover layout with Inkscape. Readers ask their question, close their eyes and wave their finger over the squares for exactly 8 seconds, then scan the QR code their finger stops at for the response. Cheesy, but no worse than talking to an inanimate object and expecting it to respond.
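A minimal sketch of such a loop, assuming the replies live one per line in a hypothetical replies.txt (the real script may differ in details):
#!/bin/sh
# Emit one numbered SVG QR code per non-empty reply.
n=1
while IFS= read -r reply; do
    [ -n "$reply" ] || continue
    qrencode -t SVG -o "$n.svg" "$reply"
    n=$((n + 1))
done < replies.txt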
The use of QR codes in this case is contrived as the directions advise people to close their eyes when they "consult" the 8-Ball anyway. The deliberate format shift is a small reminder of how various low-tech toys have been digitised, sometimes in ways that shorten their lifespan or otherwise detract from previous versions. There are some cool kits that are hybrids of physical and digital despite the questionable lifespan of the digital component, e.g. LEGO Life of George.
Started writing a preparation shell script to extract the mini rootfs, kernel and some key packages. The directory can then be synced to USB or microSD media and the kernel image flashed to the boot partition. Since it will be an installation on external media, it will reuse the coreboot bootloader already on the C201, so there's no need to configure another bootloader.
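In outline, the preparation amounts to something like the sketch below; the paths, release version and package list are placeholders, not the actual prepare.sh.
#!/bin/sh
# Unpack the mini rootfs into a staging directory.
DEST=./staging
mkdir -p "$DEST"
tar -xzf alpine-minirootfs-3.21.0-armv7.tar.gz -C "$DEST"
# Fetch key packages plus their dependencies for installation on first boot.
mkdir -p "$DEST"/var/cache/packages
apk fetch -R -o "$DEST"/var/cache/packages alpine-base openrc wpa_supplicant
# Keep the packed kernel image next to the tree for flashing later.
cp vmlinuz.kpart "$DEST"/boot/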
Reinstalled and booted the new system multiple times to fix three issues:
1. Generating the vmlinuz.kpart kernel partition image from the zImage when running make to build the kernel. A black screen could be a sign something was wrong with the vmlinuz kernel in the top level of the kernel build directory.
2. Setting the root login: adding an entry to /etc/passwd and a password in /etc/shadow copied from another filesystem worked.
3. Enabling services: they need symlinks in the /etc/runlevels/[runlevel] directories of the new filesystem to enable them.
Also changed my mind about the extra device trees in the kernel build script and will leave them in for now. Might clone it later to retain one for generic armv7 as a template and pare back this one for the C201.
=> linux-veyron-cross APKBUILD
Two more shell scripts, one following the previous one to be run at boot, and another to set a terminal motd (the message displayed when logging into the system). The script that runs at boot invokes apk fix to help ensure the extracted packages get installed properly, such as triggering post-install scripts if any are present in the packages.
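On Alpine this kind of first-boot hook can live in local.d; a minimal sketch, assuming the local OpenRC service is enabled with rc-update add local default (not the actual setup.start):
#!/bin/sh
# /etc/local.d/setup.start — finish installing the pre-extracted packages.
# Safe to re-run; apk fix does nothing once everything is consistent.
apk fix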
The motd script gathers some system information and writes it to /etc/motd. The output looks something like this:
System Information
Host  : dobin
OS    : Alpine Linux 3.21.0_alpha20240807
Kernel: Linux 6.1.39 #3-google-veyron SMP Tue May 7 03:54:42 UTC 2024 armv7l
Uptime: 22:21:51 up 1 day, 14:39, 0 users, load average: 0.16, 0.29, 0.40
Memory: 380 / 2006 M
=> setup.start | motd.start
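For the curious, a hedged sketch of how such a generator might look; the labels mirror the sample above, but this is not the actual motd.start.
#!/bin/sh
# Gather basic system facts and rewrite /etc/motd; run as root.
{
    echo "System Information"
    echo "Host  : $(hostname)"
    echo "OS    : $(. /etc/os-release; echo "$PRETTY_NAME")"
    echo "Kernel: $(uname -srv) $(uname -m)"
    echo "Uptime: $(uptime)"
    echo "Memory: $(free -m | awk '/^Mem:/ {print $3 " / " $2 " M"}')"
} > /etc/motd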
Extended the prep script to fetch and customise the starting package set. Initially grabbed the set bundled with the u-boot files in the alpine-uboot tarball, but realised didn't actually need all of them since it includes 2-3 options for some services, e.g. chrony or openntpd for time server syncing. The tarball came with 103 packages. Replaced it with a set of 76 packages, including a few that would typically be added to my Linux/Unix installs like tmux and vim. Also added copying user home directories to the prep script.
=> prepare.sh
Didn't do any IRC bot coding as initially envisioned, though did end up building a new kernel. It took 3 years for it to happen, but glad to finally have it done. The first log entry about setting up Alpine on the C201 also coincidentally had "adventure" in its title, which definitely echoed the feeling of arriving full circle with an old internet (old in the internet sense of time).
=> Alpine adventures on an Asus Chromebook C201 (2021)
Will be scripting the media partitioning and filesystem syncing steps sometime in the remaining week of 2024, and hope to settle into a new custom Alpine installation with a newer LTS kernel before or at the start of the new year.
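The partitioning step for a depthcharge device like the C201 will probably reduce to something like the sketch below, using cgpt from the ChromeOS vboot tooling. Everything here is illustrative — the device name, offsets and sizes are placeholders and the eventual flash.sh may look quite different.
#!/bin/sh
# WARNING: illustrative only; this wipes the target device.
DISK=/dev/sdX
# GPT with a ChromeOS kernel partition (1) and a Linux root partition (2).
cgpt create "$DISK"
cgpt add -i 1 -t kernel -b 8192 -s 65536 -l Kernel -S 1 -T 5 -P 10 "$DISK"
cgpt add -i 2 -t data -b 73728 -s 3800000 -l Root "$DISK"
mkfs.ext4 "${DISK}2"
# Flash the packed kernel, then sync the prepared root filesystem.
dd if=vmlinuz.kpart of="${DISK}1"
mount "${DISK}2" /mnt
rsync -a staging/ /mnt/
umount /mnt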
Wishing everyone a wonderful holiday season and a prosperous new year!
Update 2024-12-27: re-uploaded the scripts to fix 1-2 issues and added the script to copy the root filesystem. Moved the motd script into a local .start script instead of running from /etc/profile.d, due to a permission error from running as a non-root user attempting to write to /etc/motd.
=> flash.sh
Update 2024-12-28: re-uploaded the rootfs install scripts.
Update 2024-12-31: wrote yet another shell script yesterday to install a window manager/desktop workspace. Initially tried Sway/Wayland, but a broken trackpad right-click and spotty IBus support on Wayland made it a non-starter for me. There is Fcitx5 as an input method editor/IME alternative, though IBus still fits my use case and there hasn't been a need to switch. Back to i3wm/Xorg at this time. Writing this from a newly installed system on the C201, just in time for the new year.
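The gist of that script is little more than package installation plus a couple of dotfiles; a hedged sketch on Alpine with an abridged package list, not the actual script:
#!/bin/sh
# X.Org base plus i3 and a few desktop niceties.
setup-xorg-base
apk add i3wm i3status i3lock xterm rofi feh ibus
# Start the input method daemon and i3 from ~/.xinitrc.
cat > "$HOME"/.xinitrc <<'EOF'
ibus-daemon -drx
exec i3
EOF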
]]>
</content>
<id>gemini://tilde.town/~mio/log/2024-07-13-occ-2024.gmi</id>
<title>
<![CDATA[Old Computer Challenge, July 13-20 2024]]>
</title>
<updated>2024-07-19T04:27:00Z</updated>
<author>
<name>mio</name>
</author>
<link href="gemini://tilde.town/~mio/log/2024-07-13-occ-2024.gmi" rel="alternate"/>
<summary>
<![CDATA[gemini://tilde.town/~mio/log/2024-07-13-occ-2024.gmi]]>
</summary>
<content>
<![CDATA[---
date: 2024-07-13T02:57:30Z
update: 2024-07-19T04:27:00Z
It's that time of year again. The Old Computer Challenge is back this year with a DIY edition inviting people to set their own conditions for the week-long event of using an old computer. Thanks to the organisers solene, Headcrash, prahou and Tekk for hosting the event, updating the communication channels and livening up community participation.
This year's parameters:
Potential activities:
More details about the Raspberry Pi can be found in the Wikipedia entry so they will not be listed here. The minor things to note are:
A 2 GB disk translates to 1.5 GB left for packages and home directory after installing the base system and boot partition. Once upon a time a 4 GB hard disk for a consumer-grade PC was considered roomy. Today it is common for application executables to run upwards of 70-100 MB alongside small packages 1-5 MB in size. Disk space could become an obstacle for an app hoarder like me.
This post will be updated over the course of the week.
=> i3 desktop at a resolution of 800 x 600 px over the RDP session
This is my third year participating in the challenge, and while reading about various intriguing plans others had in the #oldcomputerchallenge internet relay chat (IRC) channel, it occurred to me there is one point which may not have been conveyed clearly in my logs from previous years.
For me, the OCC is a glimpse into the future, not a return to the past.
One case for reviving older hardware is to better understand where modern computing stands in relation to its past, what older technologies did well or less well, and building on the positive aspects of personal computing in the present and future.
My adjacent interests in tildes and small net protocols stem from their potential to form stable communities for learning, mutual support, and exchange of creative ideas. Having functional and effective digital tools that enable collective participation and individual autonomy extends that potential.
Tinkering can be therapeutic, and there is nothing wrong with playing on a piece of old kit for a few hours while reminiscing about that first computer or handheld received as a birthday present. Retrocomputing can be an interesting hobby. It does not mean everything was better in bygone days.
My OCC forays have focused on optimising utility and doing the occasional odd thing to see how the device handles it, rather than the pursuit of an authentic experience of the original software/hardware as one package. There are the occasional brief moments of nostalgia, but my view may be in the minority in thinking it is more helpful to highlight active open software and projects, given they do not come with the restrictions on disassembly and sharing for educational purposes which closed software with end user license agreements (EULAs) often has. Another factor is whether the devices will be connecting to the internet, and installing an updated system where available, since old software/firmware could contain security vulnerabilities as internal and third-party scrutiny shifted to newer versions and any serious bugs may remain unfixed. Airgapped devices, including for a fully offline challenge or historical research, are possibly among the few exceptions. However, people should be able to assess the risk for their circumstances — to each their own.
Initially my target device this year was a Nintendo 3DS, not a Raspberry Pi, but the current development status was not promising even if the device were to successfully scale the hurdles of triggering the browser-based exploit, installing Luma3DS, and assembling the boot partition and rootfs from 4 different tools. Non-working wifi put a damper on the plan and incidentally was also a recurring pattern in subsequent tests on the Raspberry Pi 1 B+.
Five days leading up to the event week were spent looking for a stable OS with wifi for the Raspberry Pi 1 B+, then for the Pi 3 B when the Pi 1 outfitted with a USB wireless adapter could not connect to a network, had no ethernet or monitor output, or fell into an unknown state where it was unclear whether it had booted at all.
My arbitrary criteria for OS selection may have made it more difficult to find a match. Alpine is still a decent option for Raspberry Pi 2+ devices (no armv6 support for Pi 1), but was looking to trial a different OS, also without systemd. (Not out of contempt for systemd, merely that other components it replaced tend to be simpler to configure without the complexity systemd is made to support.) Wireless connectivity is another requirement as the alternative would involve sharing a connection over ethernet with another computer that had an ethernet port and adding an extra hop, which is suboptimal.
The shortlist:
Impressed with the number of architectures it supports and for which it provides boot images. Regrettably it did not work out in the first attempt, maybe next time.
The OS of choice for the OCC founder and organisers. Previously enjoyed reading the OpenBSD zine and OCC seemed like a timely opportunity to see the BSD sights. The arm* images turned out to be unsuitable for my setup. Going with plain x86_64 in the future.
Had been looking forward to trying Chimera, which presents itself as a modern, simple to use, portable OS employing the latest developments in packaging and tooling, including utilities from FreeBSD. No luck so far. Maybe the image was not properly flashed to the card.
Was previously aware of Void as another Musl C distro alongside Alpine and Chimera, but had not planned on trying it this year until wsinatra reminded me of it. Thanks to his suggestion, the search stopped at this point and configuration could proceed on day 0.
Services are enabled by linking them into /var/service/, which only exists as a directory at runtime. Setting it up via ethernet only took a few steps and it has automatically reconnected after successive restarts.
The other two on the list were 9front (which wsinatra had also recommended) and RiscOS, but did not have enough time to attempt a test before the challenge started.
ip link set dev wlan0 up
ip addr add [device-ip]/24 dev wlan0
ip route add default via [gateway-ip] dev wlan0
wpa_passphrase [ssid] [pass] >> /etc/wpa_supplicant/wpa_supplicant.conf
ln -s /etc/sv/wpa_supplicant /var/service/
sv up wpa_supplicant
Installed X server with i3 window manager and freerdp to run GUI apps on a one-off basis over a remote desktop protocol (RDP) connection, exiting a session afterwards to free up RAM. Also included a few GUI apps that one might find in a desktop environment, e.g. feh (to set desktop wallpaper for screenshots), i3status (status bar), rofi (app launcher). i3 likes to look for a default terminal emulator, xterm or similar will work.
xbps-install freerdp freerdp3-server xauth xf86-input-evdev \
  xf86-input-libinput xf86-input-synaptics xf86-video-dummy xinit xorg-fonts \
  xorg-server xterm
xbps-install feh i3 i3status rofi xfce4-terminal xrandr
winpr-hash -u [rdp-user] -p [rdp-pass]
mkdir /etc/winpr
echo "[rdp-user]:::[winpr-hash]:::" >> /etc/winpr/SAM
echo "allowed_users = anybody" >> /etc/X11/Xwrapper.config
Section "Device" Identifier "Configured Video Device" Driver "dummy" EndSection Section "Monitor" Identifier "Configured Monitor" HorizSync 31.5-48.5 VertRefresh 50-70 EndSection Section "Screen" Identifier "Default Screen" Monitor "Configured Monitor" Device "Configured Video Device" DefaultDepth 24 SubSection "Display" Depth 24 Modes "800x600" EndSubSection EndSection
echo "exec i3" >> ~/.xinitrc
startx
If the SAM file is in a custom location, add /sam-file:/path/to/SAM in the command to set the path.
freerdp-shadow-cli3 /sec:nla
Create a script, e.g. rdp.sh, make it executable (chmod +x rdp.sh) and insert a few commands in it to start X and freerdp like a service.
#!/bin/sh
# Usage: ./rdp.sh start|stop
start_rdp() {
    nohup startx >/dev/null 2>&1 &
    nohup freerdp-shadow-cli3 /sec:nla >/dev/null 2>&1 &
}
stop_rdp() {
    pkill -f freerdp-shadow-cli3
    pkill -f i3bar
    pkill -f i3
    pkill -f Xorg
}
case "$1" in
    start) start_rdp;;
    stop) stop_rdp;;
esac
Had ~938 MB left after setting up X server, freerdp, i3 and friends. Loaded the usual roster of CLI/TUI apps:
1. Requires a urls file with a list of feed URLs, although it is supposed to be possible to run the app by using the -i flag to import from OPML; 2. scrolling down in an article with HTML shows a line of raw HTML at the top of the window that disappears when scrolling up; 3. occasionally stalls when fetching a feed, maybe a lower timeout setting will help.
(/mouse enable, then /save to enable mouse mode permanently, to allow selecting channels and scrolling channel history or nicklist with the mouse cursor). They might come in handy if most of the modifier key combinations are already bound to other applications on the host system.
Setting up Wireguard was a matter of xbps-install wireguard-tools and wg-quick up [config]. No need to manually probe the tun module, a step that may be required on some Linux distributions. There are no service files though, so it might not automatically run at startup. It is adequate for a server that would not be restarted often.
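One low-effort way to bring the tunnel up at boot on Void without writing a runit service is /etc/rc.local, which the core services run near the end of boot; a sketch assuming the config file is named wg0.conf:
# /etc/rc.local (sketch) — bring the Wireguard tunnel up once at boot.
wg-quick up wg0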
Gave up the following apps for the week due to the required storage size:
Got 789 MB left if the package cache is cleared, a little over half of that otherwise.
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/root       1.8G  1.2G  489M  71% /
# du -h /var/cache/xbps
300M    /var/cache/xbps
Faced again with the scenario of not having a good way to read and reply to comments on a GitLab CE instance from a terminal interface or a light browser. Maybe there are CLI/TUI clients now. A fallback to luakit or qutebrowser over RDP, if not entirely paralysed by lag. Further testing is in order.
Otherwise spent much of the time writing this post in vim over SSH.
Caught up on updating the post for the past few days including Day 0.
Tried to doodle on AzPainter, but that quickly fell apart due to considerable lag, aside from the lack of stylus pressure sensitivity. Most strokes do not appear until one to five seconds after the stylus was lifted. Buttons took three to ten seconds to register.
How about pixel art? Installed Grafx2, only for it to segfault on launch. mtPaint was less beset by lag, though not by much.
Vector? Xfig's interface started up immediately, and was as unusable as Azpainter for a similar reason.
Before giving up on the idea entirely, switched to running a VNC server for comparison — the difference was night and day. There may be an occasional half-second delay at the start of a series of actions, but otherwise the display was like a monitor directly attached to the device. Strokes in both AzPainter and mtPaint were much smoother, button and menu selection feedback were almost immediate.
=> Screenshot of mtPaint over RDP | Screenshot of AzPainter over VNC
xbps-install x11vnc
x11vnc -storepasswd
DISPLAY=:0 x11vnc -rfbauth ~/.vnc/passwd
The contrast between RDP and VNC may be down to whichever implementations of both happened to be available in the repos. Freerdp3 did seem CPU-bound, running between 40-51% CPU when there is activity, dropping to ~1-13% when idle. x11vnc was at ~0.23%, <1%.
Tried tigervnc server first but could not get it to run. After mapping a display to a user in /etc/tigervnc/vncserver.users and setting an access password, running vncsession [user] :0 resulted in a Failed to start session error. Maybe it needed the $DISPLAY variable passed in like x11vnc, as it is not automatically set without an actual monitor.
Two minor detracting points, also application-specific:
The -forever or -many flag keeps the server listening indefinitely. In the shell script, set DISPLAY=:0 before invoking the x11vnc command with nohup (see the sketch after this paragraph).
Kudos to Void and maintainer mobinmob for carrying mtPaint in the repos. Alpine has Pinta (the package there is currently orphaned), which Void does not, but no mtPaint yet. As a paint program for a lower-spec device, mtPaint is the lighter choice. (Random aside: liking that xbps-query [package] includes the package maintainer in the information.)
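Analogous to rdp.sh above, the VNC side can be wrapped the same way; a sketch assuming the password file created earlier with x11vnc -storepasswd:
#!/bin/sh
# Usage: ./vnc.sh start|stop
start_vnc() {
    nohup startx >/dev/null 2>&1 &
    # -forever keeps the server listening after a client disconnects.
    DISPLAY=:0 nohup x11vnc -forever -rfbauth "$HOME"/.vnc/passwd >/dev/null 2>&1 &
}
stop_vnc() {
    pkill -f x11vnc
    pkill -f i3
    pkill -f Xorg
}
case "$1" in
    start) start_vnc;;
    stop) stop_vnc;;
esac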
Still very slightly off-kilter without stylus pressure, but it can be somewhat compensated for with a few settings. In AzPainter, lowering the flow density in a water brush also helps emulate variable stroke opacity, which is a characteristic of different pressure levels. Stroke tapering is likely in one of the windows that were minimised due to the miniature — compared with today's widescreen and 4K monitors — screen estate. A pale shadow of stylus pressure dynamics, but certainly better than nothing.
Here, have a doodle! It is not very good at all, but it might dispel the myth in the OCC IRC channel that no one participating in the OCC has created anything. And annoy a certain talented, temporarily xorgless artist who seems to relish drawing Tux small(er than Fugu) and cute.
=> Portrait of the Penguin as a Young Artist
Past the initial setup, using Void Linux has been more or less like other Linux distributions without systemd, if with seemingly fewer service scripts. Have not had to dig into the inner mechanics like writing runit services and cannot comment on them, though did notice in the user handbook that runit has a user/rootless mode for services, which is possibly the one feature previously seen in systemd that would be handy to have in OpenRC (the service manager Alpine uses).
Found out to my dismay that the qutebrowser package in Void does not run with QtWebEngine, and seems to only be built with QtWebkit. Opting to proceed with QtWebkit brings up a warning about how outdated the latter is, made worse by browsing to the front page of the GitLab CE instance and seeing it only partially rendered. Fortunately Luakit, despite also being Webkit-based, still functions like the one in Alpine and was usable on a single-tab basis over VNC. Logging into websites needed a bit more setup as clipboard pasting is not enabled over the VNC connection, but it is possible e.g. by preloading credentials and retrieving them from pass. A full-fledged password manager might be required for 2FA/TOTP though. Since the Void contributor documentation has a policy of not accepting browser forks due to additional maintenance overhead, more privacy-conscious variants such as LibreWolf would not be available, which is unfortunate albeit understandable. Both Firefox and Chromium update so frequently that keeping up with two major browsers is already considerable effort.
Ran out of storage space while installing some development libraries to compile the mystery app[1]. Could uninstall everything else and put them back later to temporarily recover some space, but the chances of success are low due to three other dependencies missing from the repos. Activity idea discarded, however:
# rm -r /var/cache/xbps/*
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/root       1.8G  1.1G  554M  67% /
# xbps-install [packages]
[...]
Size to download: 153MB
Size required on disk: 558MB
Space available on disk: 659MB
Does something look funny? Somehow xbps found an extra 105 MB on the partition then lost it mid-install, instead of stopping immediately as it has done in other scenarios. If in doubt, df has the more reliable number.
[1]: The pixel art editor LibreSprite. No mystery there given activity #1, but congratulations if you guessed correctly.
Tried to run Goxel, a voxel editor, and got two errors:
error: XDG_RUNTIME_DIR is invalid or not set in the environment.
glfw error 65542 (GLX: GLX extension not found)
Segmentation fault
export XDG_RUNTIME_DIR=/run/user/$(id -u)
xbps-install mesa-dri
(enables swrast_dri.so to be found)
Sadly not very usable on the Pi 3 — CPU usage ratchets up to 375% and add/remove block operations lag, even if memory is at a manageable ~120-150 M for a small file. The application consumes 25% CPU on another armv7 device where it is considerably more responsive, so the issue may be a CPU limitation. Otherwise a nifty voxel editor on a faster CPU.
Found a TUI "drawing" app that can be used to create ASCII art called draw. So far it works best over xterm if accessed over SSH — running it in another terminal or tmux can cause it to be unable to detect characters properly and litter alt[
and other unwanted control characters onto the canvas. Compose key characters can be used too, though not non-Latin characters.
Uninstalled Go after compiling the program to conserve storage space. Probably needed a bit under 300 MB, most of it for the Go compiler package itself. The resulting executable is 3.1 MB, small enough to keep around.
=> OCC (draw test) | Bird (draw test)
xbps-install go
go install github.com/maaslalani/draw@latest
~/go/bin/draw
Usage instructions are in the source repo readme.
Came upon two more ASCII and ANSI art editors, aewan and durdraw. The draw program from the day before still acts up with stylus input on xterm, and the other two do not work with the stylus although both TUIs are mouse-aware. Block by block is probably how drawings were meant to be made. Aewan can be used to make animations and export the "frames" as individual text files inside a gzipped archive (have not tested the feature). Durdraw has a cool logo, UTF-8 support and different colour modes among other features.
The applications are not in the Void repos, but building them from source is straightforward after a bit more disk space shuffling.
xbps-install gcc make ncurses-devel zlib-devel
tar -zxf aewan-1.0.01.tar.gz
cd aewan-1.0.01
./configure
make
make install with superuser privileges, or retrieve just the aecat, aemakeflic and aewan executables
xbps-install python3 python3-Pillow
git clone https://github.com/cmang/durdraw.git cd durdraw
python3 -mvenv --system-site-packages venv
. venv/bin/activate
pip install --upgrade .
deactivate
~/durdraw/venv/bin/durdraw
Revived a dubious tradition from previous year's OCC of picking a game and playing a session. Selected a random game from the awesome-tuis list called Brogue CE. Rarely played dungeon crawlers and had not configured w3m to open images, so only realised after building from source and starting the game that it had striking similarities with angband which was already in the repos. However, it does have interesting features in the form of more seeds and different modes. When running in terminal mode over SSH, avoid starting it inside tmux as it causes a spike in tmux's CPU usage from negligible to 38% on the Pi. Brogue CE itself ran at around 25% CPU, and lagged when the character is surrounded by four or five creatures, but was playable overall.
Did not make it past level 4 on the first attempt — got swarmed by jellies. It would have been almost cute, perishing by lag aside.
=> rothgar/awesome-tuis | Brogue: Community Edition gameplay differences | More information about the Linux version of Brogue
=> 2024-07-13-occ-2024/brogue-ce-splash-screen.png | 2024-07-13-occ-2024/brogue-ce-start.png
xbps-install gcc make ncurses-devel SDL2-devel SDL2_image-devel
mesa-dri
if running from X server: SDL SDL2_image
pkgver=v1.13
curl -LO https://github.com/tmewett/BrogueCE/archive/v1.13/BrogueCE-$pkgver.tar.gz
tar xf BrogueCE-$pkgver.tar.gz
cd BrogueCE-$pkgver
make TERMINAL=YES
./brogue -t
Closing out the event week with a few thoughts.
CPU was the more noticeable bottleneck this year, which was to be expected as the older Pis are not known for good CPU performance, though the Pi 3 will still compile source for small applications without issue.
Did not reach 512 MB in memory usage at any time during the week. It was mostly spent in TUI over SSH.
The disk space limit started to bite when building programs from source. Had to do considerable package rearrangement to free up enough space for Brogue CE's SDL2 and SDL2_image-devel dependencies (~563 MB) on top of gcc tooling with kernel headers (~215 MB). Git alone was 44 MB with dependencies, which combined made up just over half of the available storage. Removed X server, i3 and all GUI applications, clearing the package cache manually in the process. Had ~15 MB of free space left after building Brogue CE. The game executable itself was 1.3 MB. Fortunately SDL2 and SDL2_image (not the development files) were only 2 MB combined. Making installation of docs or manpages optional (as in Alpine) or a package manager flag to skip docs would have been helpful and saved ~50 MB of space.
On the topic of the package manager, there were a handful of things about xbps that were inconsistent or slightly annoying:
xbps-query -R *brogue* runs a search, but if the search term is capitalised after the wildcard character it does not work, e.g. xbps-query -R *Brogue*.
xbps-install does not automatically check whether the repodata has changed and keeps aborting when package signatures could not be found. This happens daily if installing new packages, often enough that it has a mention on the website FAQ.
The free space xbps calculates can differ from df -h /. Practically this means that while it usually declines to install if there is insufficient disk space, there are instances when it estimates more space than is available, then throws an error and exits mid-install due to no space remaining.
Removing the dependencies of multiple packages at once does not always work with the -R flag. Most if not all package managers should be able to carry this out as part of dependency management (a workaround is sketched after the link below).
=> xbps-remove -R does not remove all dependencies when removing multiple packages
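A workaround that should cover most of it is sweeping orphans as a separate step, since xbps-remove has a dedicated flag for that (and another for the cache):
xbps-remove [package]
# Remove dependencies no longer required by anything installed.
xbps-remove -o
# Clear the package cache.
xbps-remove -O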
Nevertheless, due credit to Void for simplifying setup for the Pi, providing both boot images and rootfs tarballs.
The smaller storage space led to a preference towards small compiled executables where the build toolchain and development headers can be removed afterwards to make room for other packages, and fewer applications that require several libraries to remain installed at runtime. It also meant being less likely to build applications that fetched dependencies with language package managers like cargo or npm if the lockfile holds a long list of packages.
The most pleasant surprise was x11vnc. Had not used VNC in some time and initially factored in a two to three-second latency over a local network that would be especially apparent in drawing applications, but the resulting connection ran better than anticipated. Finding more drawing programs and revisiting two others was another highlight to the week.
While there were no particular goals this time for the Pi 3, it was good to see the device is still supported by multiple OSes. Hopefully the preparation issues were temporary and the support will translate to better luck with a BSD next time.
]]>
</content>
<id>gemini://tilde.town/~mio/log/2024-06-01-tilde30.gmi</id>
<title>
<![CDATA[tilde30, June 1-30 2024]]>
</title>
<updated>2024-06-30T03:39:51Z</updated>
<author>
<name>mio</name>
</author>
<link href="gemini://tilde.town/~mio/log/2024-06-01-tilde30.gmi" rel="alternate"/>
<summary>
<![CDATA[gemini://tilde.town/~mio/log/2024-06-01-tilde30.gmi]]>
</summary>
<content>
<![CDATA[---
date: 2024-05-31T19:30:46Z
update: 2024-06-30T03:39:51Z
Tilde 30 is an event that involves picking a project or series of related activities and doing them over the course of thirty days. The main premise is to set weekly milestones or goals and complete them. More details:
=> tilde30.txt
Townies can also see a list of what people will be doing in the "tilde30!" thread on bbj, the town bulletin board. Thanks to ~elly for organising the event.
My tilde30 mini-project will be a static gallery gemerator. Given a folder of images and metadata text files, write a CLI utility to generate static pages for a simple web gallery.
Week 1: learn enough of a programming language to make mischief
Week 2: model config settings, parse sample config and image information
Week 3: write logic to source thumbnails, generate HTML from a template
Week 4: template sample gallery theme
Stretch goal: add HTML/CSS to the forthcoming stats feature in Katalogo. Katalogo is a webring server by ~durrendal. Timing will depend on feature progress and remaining time. Templating can be done more effectively when the backend returns the sample data to be displayed.
=> Katalogo
When last checked several years ago, most web gallery generators were either written in PHP, content management systems that handled embedding multimedia, or application servers that would not be easily made publicly accessible outside of ~town. Static HTML pages were less resource-intensive and could adequately display small image collections. The closest static site generators with a gallery component were Sigal and Nikola. Sigal was centered around generating gallery pages and image thumbnails, configuration and theming were straightforward, though it did not handle adding other pages that were not part of a gallery. Nikola had a gallery view alongside regular pages and other features, but the image lightbox view did not support multi-paragraph captions. My fix at the time was to combine the gallery built through Sigal with a shell script to output extra text-only pages, both sharing the same HTML theme. The arrangement worked, if clunkily.
=> Sigal — Simple Static Gallery Generator | Nikola — Static Site Generator
One or two months ago, someone asked in ~town chat if there was a gallery generator installed. By then had written a single-page generator that could serve as a gallery feature or subsite, but the idea of producing the gallery applet that should have existed years ago lingered. In twenty-nine days, the folly of this idea will be revealed.
=> znic: 1-page webzine generator
Update 2024-06-23: found imgram, a shell script HTML gallery generator while browsing a cluster of websites adjacent to the Old Computer Challenge gateway. It is closer to an imageboard without comments, having a Tumblr-like layout with support for tags and pagination. The author, prahou, kindly pointed me to the source (link below). A good option with a RSS feed.
=> imgram | Old Computer Challenge
The sections below will be updated through the month.
Start: 2024-06-01
Week 1: 2024-06-08
Week 2: 2024-06-15
Week 3: 2024-06-22
Week 4: 2024-06-29
End: 2024-06-30
Wrote a small script to update the ~/.plan and ~/.project files in ~town, in accordance with the event guidelines. It is added as a remote SSH command to my gemwriter capsule config, which asks the utility to refresh the dotfiles on publish or update. The script helps me log from Gemini while synchronising the .project file. It should work with other plain text formats beside Gemini with a few config adjustments.
The .plan and .project files would otherwise remain unused, and the script in its current state is adequate for my use case. Less time spent on ancillary tooling, more time available to do the project. Others are welcome to modify it for their needs if they wish. Townies can copy ~mio/bin/t30 to their own bin/ and run it from there. Example config for this gemtext at ~mio/.t30.json.
=> t30.nim
zig fmt [file]. Could not reproduce the issue running the command standalone after the first time it hung there. No async library in recent versions, but not a requirement for the project. Would take a closer look at Zig in a future project.
# Check Helix's language server detection and grammar support.
# Enable Go in the use-grammars list in ~/.config/helix/languages.toml.
hx --health go
hx --grammar fetch
hx --grammar build
# Install Go compiler and language server.
apk add go gopls
=> A plain HTML page with 2 rows of 6 square thumbnails resized from illustrations
Lesson recap:
=> Advantages to JSON as []byte instead of string
JSON marshalling relies on the reflect package, which works on exported fields. If the fields are unexported, the result will be an empty JSON object, {}.
There are two Go packages named draw, one in the standard library and the other a drop-in replacement for the former. Both package pages point to the same introduction article but one of them is missing key features, e.g. the standard library draw does not have the Copy function. In the event of an undefined: draw.[...] error when attempting to call one of the scaling variables or the Scale function, check the import statement to ensure the correct package is imported — golang.org/x/image/draw, not image/draw, which has fewer functions.
=> image/draw (standard library) | golang.org/x/image/draw
A go.mod file is needed because go get no longer installs packages outside a module. Another thing to note is if the project directory is not within GOPATH (by default $HOME/go; check the current value with go env GOPATH), either move it, or pass in a module or project directory name for the init command to work.
# Generate a go.mod file.
go mod init [module]
# Fetch the package.
go get golang.org/x/image/draw
# Add import path to source.
import "golang.org/x/image/draw"
Go does not have optional parameters with default values; the nearest equivalent is variadic parameters (the func foo(param ...string) syntax), which allow for empty slices including not passing a value. Depending on use case, it can act like an optional parameter and is fine as long as the function only acts on the same number of elements as present in the slice. It may yield unexpected results (additional elements are ignored) or leave cruft unchecked if more elements are passed in than the function handles, as the compiler would not warn of a mismatch.
=> Default value in Go's method | Proposal: add limits to variadic definition | Why does Go not support overloading of methods and operators?
float32(600/800) and float32(600)/float32(800) are different. The first results in 0, the other 0.75.
To copy a directory, list it with os.ReadDir and apply os.Mkdir, os.Read and os.Write to copy each file's contents accordingly.
=> golang/go os: add CopyFS | 3 ways to copy files in Go
image: functions for calculating image crop size and resizing.
template: functions to generate gallery pages.
util: a mix of supplementary functions unavailable in the standard library, e.g. multiple substring replacement for strings, boilerplate for copying files, sorted file lists.
=> A plain HTML page with two sets of image thumbnails
Lesson recap:
For sorting a slice of strings there is sort.Strings, which as of Go 1.22 is an alias for slices.Sort. sort does not have a string reverse sort; its Sort and Reverse are for interfaces — use slices.Reverse.
strings.Title was deprecated in Go 1.18 and the documentation recommends using the golang.org/x/text/cases package.
=> strings.Title (deprecated) | cases.Title
go run main.go
gives an error like
### Week 3
* Added basic navigation links to the image page template. Each image description page may include "next", "previous" and "top" links to more easily browse through a set of images. A bug was found wherein the next/previous links were incorrect in 1-2 of the image pages, to be fixed later.
* Included CSS files in the album output function.
* Began applying basic CSS styling to the set page.
* Added a CSS lightbox view, which loaded individual image pages on top of the thumbnails. This allowed for viewing image details, then hiding the overlay again to select another thumbnail. The feature still needs minor adjustments at the moment — the navigation links on the page template should hide the "top" link since there is already another link to hide the overlay and return to the thumbnails.
=> 2024-06-01-tilde30/set-page.png A page with the image set title and description at the top, followed by 3 rows of 4 thumbnails centered on the page
=> 2024-06-01-tilde30/set-page-lightbox.png Same page showing a lightbox view, with a translucent white layer over the visible area and a larger image of a paper tilde on top, the title and short 1-line description below the image
Lesson recap:
* When attempting to convert a single digit `int` to a `string` by passing it into the `string` function, the linter will warn that the `conversion from int to string yields a string of one rune, not a string of digit (did you mean fmt.Sprint(x)?)`. In a future version this will become an error. Workaround: convert using `strconv.Itoa()`.
=> https://github.com/golang/go/issues/46597#issuecomment-855485007 cmd/vet: don't complain about int to string conversion
=> https://stackoverflow.com/a/10105983 How to convert an int value to string
### Week 4
* Made lightbox mode responsive.
* Wrote a brief 1-page document with usage instructions.
* Fixed bugs: 1. CSS styles not copied when the `make` option is invoked the first time. 2. Top-level index.html being output to the source directory top-level when the config value is empty. 3. Runtime error when there is only one image in a set. Also hid the lightbox navigation links in this case. 4. Check for duplicate index page results in the index page being renamed when the set page has a description file. 5. Image descriptions not displayed after adjusting the description file path check.
* Embedded a plain sample theme with the executable to make it easier for users to get started.
* Added a project demo page.
* Backfilled doc comments for functions.
Lesson recap:
* `strings.Replace` will insert the replacement substring at the beginning of the origin string if there are no substring matches. A workaround to prevent accidental insertion is to check whether the origin string contains the search substring before replacing.
* For the `embed` package, the `//go:embed` directive cannot be used inside a function. The linter will warn even though the statement does precede a `var` declaration inside the function. Apparently embedding locally was removed due to poor interaction with byte slices.
=> https://github.com/golang/go/issues/43216 embed: remove support for embedding directives on local variables
=> https://stackoverflow.com/a/28071360 Bundling static resources
=> https://pkg.go.dev/embed embed package
* Two things to be aware of with paths in the `//go:embed` directive: 1. Variables cannot be used in the `//go:embed` directive. Oddly, the linter did not notify of an error.
//go:embed filepath.Join(SampleThemeDir, "*")
template/template.go:196:3: invalid quoted string in //go:embed: )
template/template.go:196:3: usage: //go:embed pattern...
//go:embed themes/nettle/*
2. The path is relative to the package root directory. If the directive is invoked from a subdirectory and the files are in another adjacent subdirectory, either move the files to a location inside the same subdirectory, or create a `config.go` file and embed the files from there.
=> https://blog.saintmalik.me/embedding-static-files-in-go-cli/ How to Use //go:embed to embed static files in CLI
### End
=> https://git.tilde.town/mio/lamium Source repo
=> https://tilde.town/~mio/lamium Project demo page
* Pushed changes over HTTPS as `git [push|pull]` over SSH did not work for me at the moment. The error was `exec request failed on channel 0 [...] fatal: Could not read from remote repository.` No change to SSH keys or config for git.tilde.town. A quick web search indicated it could be an app server issue from the git user spawning more processes than allowed in `ulimit`.
=> https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/8209#note_1873981796 Problems with fetching repos via ssh after updated from 16.1.5 to 16.3.3
* Left a build in town at `~mio/bin/lm` in case anyone might be interested in trying it.
]]>
gemini://tilde.town/~mio/log/2023-10-31-embed-an-image-in-groff.gmi
2023-10-31T05:32:00Z
mio
" cover.ms
" [page-width] is the page width, e.g. 14.8c
.nr HM 0
.vs 0
.po 0
.ll [page-width]
.PSPIC "image.ps" [page-width + 0.1c]
Groff command
Replace [page-size] with the paper size, e.g. a5
groff -P-p[page-size] -ms cover.ms -Tps | ps2pdf - output.pdf
Below is a longer answer.
## Background
Recently a brave townie (not me) volunteered to embark on a quest to produce the pdf layout of the tildetown zine while wielding the Grand Goblet of Groff. In a trivial gesture of support, a zine cover image was handed off to this hero, along with an unsolicited suggestion for how said image might be embedded into the rest of the document, because shady merchants selling a mission know to throw in a freebie to sweeten the deal.
Actually, preparing a suggestion took more time than illustrating the cover pages. The only things yours truly knew about roff before this were that a few townies used it in some incarnation ([g|n|o|s|t]roff), and others used it to format man pages. 1% of the "why" and 0% of the "how". This note might be 0.001% of the "how", but maybe it will save someone else a bit of hassle.
## pspic and pdfpic
A macro is a group of requests or instructions for formatting a document. According to a groff cheatsheet, there are two macros for handling images.
1. PSPIC
This allows embedding a Postscript image, but only supports html/xhtml/ps/dvi output. This should be fine for a web version, but for a pdf, it would be an extra step to convert the html or ps back to pdf. If the original image is in another format, e.g. jpg, then it would have to first be converted to ps or eps.
Install ghostscript for the ps2pdf utility
Optionally install [image|graphics]magick to convert to ps from another format
$ doas apk add ghostscript
Using imagemagick
For graphicsmagick, use: gm convert [...]
convert cover.jpg cover.ps
Example calling PSPIC inside a ms macro file
$ echo '.PSPIC "cover.ps"' > cover.ms
$ groff -P-pa5 -ms cover.ms | ps2pdf - output.pdf
The `-P-pa5` flag tells the postprocessor to set the paper size to A5. The default size is US Letter.
2. PDFPIC
This allows adding pdf pages, which in turn can contain images, alongside ones typeset with troff. Oddly, `gropdf` appears to be broken in the Alpine package for armv7 — the executable exists but doesn't run. The `not found` error commonly happens when there is a device architecture mismatch, e.g. trying to run an x86_64 binary on an armv7 device. Subsequently found it worked on an x86_64 box.
Install poppler-utils if not already present,
otherwise it may throw an error like the following:
sh: pdfinfo: not found
pdfpic.tmac:cover.ms:1: error: retrieval of 'cover.pdf' image dimensions
failed; skipping
$ doas apk add poppler-utils
$ echo '.PDFPIC "cover.pdf" 14.8c 21.0c' > cover.ms
$ groff -ms cover.ms -Tpdf > output.pdf
groff: error: couldn't exec gropdf: No such file or directory
pdfpic.tmac:cover.ms:1: error: use of PDFPIC requires GNU troff's unsafe mode
(-U option)
$ whereis gropdf
gropdf: /usr/bin/gropdf
$ /usr/bin/gropdf
-ash: /usr/bin/gropdf: not found
$ ls -la /usr/bin/gropdf
-rwxr-xr-x 1 root root 82245 Jul 29 18:59 /usr/bin/gropdf
Unsafe mode? From the GNU Troff manual:
> Operate in unsafe mode, which enables the open, opena, pi, pso, and sy requests. These requests are disabled by default because they allow an untrusted input document to write to arbitrary file names and run arbitrary commands.
Enabling this is probably a terrible idea if the pdf is from an unknown source. Not sure why it was necessary to implement the feature in a way that allows running arbitrary commands. It could be a legacy from a more innocent era. The choice of which to use may depend on trusting that townies are good and smart beings who will not arbitrarily spring a malware payload inside a pdf submission.
## Aside: Xfig and pic
How about generating graphics with the troff family of tools? There is a `pic` preprocessor, but writing procedural source without some form of live preview is a pain. Fortunately, Xfig is an X drawing application that can export to a variety of formats including eps, ps and pic/tpic. Theoretically, it should be possible to draw an image with Xfig, save it as pic and insert the instructions into a troff file.
Install Xfig, and fig2dev for .png and .pdf export
$ doas apk add xfig fig2dev
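A rough sketch of the intended round trip (file names are illustrative, and as noted further down this exact route did not quite work out in practice):
$ fig2dev -L pic drawing.fig drawing.pic
Paste the contents of drawing.pic between the `.PS`/`.PE` markers in an ms file, then run groff with the pic preprocessor enabled:
$ groff -p -ms figure.ms | ps2pdf - figure.pdf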
In terms of feature set, the application is closer to Dia than Inkscape, more geared towards flow charts and schematic diagramming. It comes with a small library of objects for areas like electronic schematics, desktop app wireframes and origami(!) diagrams. There is a compound object feature which, like Inkscape's object group/ungroup, allows objects to be moved and scaled as if they were one large object.
A few limitations: it only supports a small set of fonts, mostly old and non-free, e.g. Times New Roman and Helvetica, and getting it to recognise the corresponding font files is a guessing game unless they are imported straight from Windows/Mac systems. Object rotation is locked to certain angles; free rotation could not be activated. There does not seem to be any option to set the paper size from within the user interface, though it is easy to change by opening the save file in a text editor and replacing the default Letter size with another preconfigured size from the table in the manual.
The rough edges: tooling windows overflow vertically with no scrollbars when the zoom setting is set higher than 1.0, which makes any buttons at the bottom inaccessible. Text input fields in the windows tend to get stuck, requiring several clicks inside the field and pressing backspace 2-3 times to change numerical values. Objects in the exported file are often shifted or out of alignment compared with the on-screen display. Grouping objects then moving the compound object does not preserve the original order of visibility of the objects. To get around it, each object's depth could be set to a different value before grouping. The default value is 50. A lower number makes an object appear in the foreground, and a higher number puts it further in the background.
It started to slow down considerably when using transform-type tools (edit, move, scale, remove, etc.) after ~20 objects, as it tried to display the nodes of each object for selection. One of the files had 99 objects. For some reason it struggled for minutes with each tool selection, and the window manager was also unresponsive for long stretches. Suspecting an iowait bottleneck from a system running on sdcard media. It consistently pegged 1 core (of 4 cores) whenever it happened and unblocked itself long enough for me to bring up htop to check, though it used little RAM. Ended up testing settings in a different buffer file and editing the file in a text editor to apply the changes.
The other setback was that tpic export did not work for me following the tip on the unofficial groff cheatsheet page to paste the source between the `.PS`/`.PE` block. Groff treated the source as literal text when run from one device, regardless of whether the content was in pic or tpic format. Maybe the groff command was missing the pic preprocessor flag (`-p`) at the time. On another device, groff did not recognise the colour assignment in the tpic syntax. Attempting to embed the pic file, such as `.PIC -C -I 0 "cover.pic" 14.8c 21.0c`, caused the groff command to hang and the output file was 0 bytes in size.
While it was fun trying out a new application, it became almost unusable for a drawing with even slight complexity. Wrapped up a cover in Xfig, but it was back to Inkscape for the next idea.
## Image layout
Embedding the image was easy, but having it fill the page took more searching.
This is partly due to 1) unfamiliarity with the pipeline, i.e. how the options were grouped and which options were groff-specific, and 2) some of the options either having no examples or not working as anticipated based on brief, often one-line descriptions. Eventually found a combination of options between the manpage of groff's implementation of the memorandum macros (mm), the Troff User's Manual and, in the case of the manuscript macros (ms), the GNU Troff Manual. In an example file:
.nr HM 0
.vs 0
.po 0
.ll 14.8c
.PSPIC "cover.ps" 14.9c
The first line sets the header margin to 0. `HM` is an ms register, and a similar effect can be obtained in mm by resetting the top of the page, overriding the default header: `.TP` In both cases, the request must be set on the first line of the file without any comments above it, or it will not work. The next line, `.vs 0`, sets the vertical baseline spacing to 0 and, in conjunction with `.TP`, affects the visible spacing between the top edge of the page and the first line of content. `.po` or page offset controls the left margin (there is no right margin setting), and is also set to 0. `.ll` stands for line length and in this case spans the entire width of the page, 14.8 cm (an A5 page is 148 mm x 210 mm). Supported units include centimeters (c) and inches (i) among others. Then `.PSPIC` or `.PDFPIC` can be called with the image file.
The `14.9c` is not a typo — something about the way ps and pdf files are rendered results in a <0.5 mm gap on the right edge of the page. This also happens when images are exported from Inkscape with the exact document size and re-imported. The ps `%%DocumentMedia` field is set to A5, and the evince pdf viewer likewise reports the size as A5. As a workaround, setting the width 1 mm wider in `.PSPIC` closed the gap.
There is also a vertical margin option, as in `.vm 0 0`, found in vol. 2 of the Unix System V Documenter's Workbench manual, which initially sounded like the most obvious parameter to adjust, but it did not work to remove vertical margins, only added to a baseline setting.
## Next directions
* Find out why the Alpine groff package is broken on armv7, e.g. whether it was something specific to the system installation, or try to rebuild the package from source and see if the error is reproducible.
* Look into other troff implementations?
=> https://heirloom.sourceforge.net/doctools.html Heirloom Documentation Tools
=> https://github.com/aligrudi/neatroff Neatroff
## Sources
=> https://l04db4l4nc3r.github.io/groff-cheatsheet/examples/preprocessing/photos/ groff cheatsheet — graphics and photos using GNU troff
=> https://lists.gnu.org/archive/html/groff/2010-04/msg00042.html Embedding pictures or images (groff mailing list)
=> https://stackoverflow.com/questions/75990034/how-to-convert-a-pdf-file-to-letter-size-using-groff Convert a pdf file to Letter size using groff
=> https://www.gnu.org/software/groff/manual/groff.html.node/ms-Document-Control-Settings.html Document control settings (Groff manual)
=> https://lists.gnu.org/r/groff/2012-03/msg00014.html Placement of PSPICs (groff mailing list)
=> https://mcj.sourceforge.net/frm_options.html Xfig PRINT and EXPORT settings
=> https://linux.die.net/man/7/groff_mm groff_mm manpage
=> https://stackoverflow.com/questions/77329212/setting-top-margin-in-groff-to-zero-using-mm-macros Setting top margin in groff to zero using mm macros
=> https://www.troff.org/54.pdf Troff User's Manual
=> https://lists.gnu.org/archive/html/groff/2021-05/msg00033.html Is decreasing top margin with mm macros broken?
=> http://www.bitsavers.org/pdf/altos/3068/690-15844-001_Altos_Unix_System_V_Documenters_Workbench_Vol_2_Jul85.pdf Unix System V Documenter's Workbench Volume Two
]]>gemini://tilde.town/~mio/log/2023-07-10-occ-2023.gmi 2023-07-17T03:05:00Z mio
gemini://occ.deadnet.se Old Computer Challenge
=> https://www.cnet.com/reviews/samsung-n150-11-review/ Samsung N150-11 review on CNET
## Day 1
The most noticeable bottleneck seems to be the CPU.
Firefox chokes out of the box with loads of 8.00 and higher despite having over 100 MB of memory still available, though the load could also be caused by iowait. Qutebrowser is somewhat usable with a few open tabs and Javascript disabled. Luakit is another light option, but gets fewer browser engine updates than Qutebrowser, which may be a security concern for some people. Midori was either unstable or had rendering issues whenever I tried it, and none of the other lightweight GUI browsers like Dillo or K-meleon came close to rendering pages completely, especially on Javascript-heavy sites.
OCC organiser Solène linked to arkenfox/user.js in the #oldcomputerchallenge IRC channel, which applies some performance adjustments and security settings to Firefox. It remains to be seen if it will make any difference on this netbook. In the meantime, w3m is great for text-based browsing and also supports gopherholes. Amfora has a solid interface for viewing gemini capsules. Qutebrowser is only opened for sites that really require full rendering.
Memory usage is 90-123 MB, which varies depending on the number of vim instances open; sometimes a GUI file manager or an image or PDF viewer is also running. At a given moment there are 2 xterm windows under i3 (the window manager) — one for ssh, and another with tmux and several panes for w3m, amfora, vifm (file manager) and vim.
=> https://github.com/arkenfox/user.js arkenfox/user.js
## Day 2
Mixed news. The no-good bit — I tried the Arkenfox user.js with a Firefox ESR 112 profile. It successfully applied a template of settings, but there were no noticeable performance improvements. The browser stutters and becomes unresponsive for minutes at a time over simple interactions like typing a URL in the address bar, despite turning off search engine autosuggest. To be fair, Qutebrowser is also slow to start and crashes on some sites with Javascript enabled under the OCC setup, but it could manage 3-5 tabs if Javascript is disabled. The terminal output from firefox-esr shows a few errors:
Crash Annotation GraphicsCriticalError: |[0][GFX1-]: glxtest: libpci missing (t=184.963) [GFX1-]: glxtest: libpci missing
Crash Annotation GraphicsCriticalError: |[0][GFX1-]: glxtest: libpci missing (t=184.963) |[1][GFX1-]: vaapitest: ERROR (t=185.16) [GFX1-]: vaapitest: ERROR
Crash Annotation GraphicsCriticalError: |[0][GFX1-]: glxtest: libpci missing (t=184.963) |[1][GFX1-]: vaapitest: ERROR (t=185.16) |[2][GFX1-]: vaapitest: VA-API test failed: failed to initialise VAAPI connection.
(t=185.164) [GFX1-]: vaapitest: VA-API test failed: failed to initialise VAAPI connection.
The libpci error went away after installing pciutils-libs (or pci-dev in some distros), but installing intel-media-driver, libva-intel-driver and gst-vaapi as suggested in two Reddit threads didn't satisfy the vaapitest.
This little diversion involving Firefox was brought on while looking for the fastest way to turn a Wikipedia article into a PDF. Firefox's "print to PDF" feature saves the pages with the source URL helpfully included, whereas Qutebrowser saves pages as images, which mostly preserves the visual style of pages but means the text cannot be selected. There used to be another utility called wkhtmltopdf, but the source repo is archived and it does not appear to be in development anymore. Another option is pandoc, which converts between various text formats and is available in the Alpine repos for x86_64. Actually, in the case of Wikipedia, the fastest way is to select the "Download as PDF" option under the "Print/export" menu on the article page.
The good news: DOSBox works! It averages ~176 MB RAM/~45% CPU while running SimCity 2000. Mouse movement is a bit fast but poses no issues. Initially Sim Tower was on the menu, but it required Windows 3.1 or another old version, and wrangling with Wine is also not on the menu for OCC week. DOSBox can run other titles, and there are native Linux games.
=> 2023-07-10-occ-2023/sc2k-1.png SimCity 2000 title screen in DOSBox
=> 2023-07-10-occ-2023/sc2k-2.png SimCity 2000 — new town?
=> 2023-07-10-occ-2023/sc2k-3.png SimCity 2000 — town in the year 2050???
Memory usage when not running DOSBox remains steady at ~104 MB, or ~131 MB if counting 27 MB from weechat (IRC client) and neomutt (mail client) being offloaded to an ssh session. The data is stored on another box and it made little sense to add individual directory aliases to remote mounts only for a week for running the two applications on the netbook.
### Links
=> https://teddit.net/r/firefox/comments/10rtkhz/vaapi_test_failed_failed_to_initialise_vaapi/ Reddit thread on the vaapi error from Feb 2023
=> https://teddit.net/r/archlinux/comments/xt4t2m/anyone_having_issues_with_firefox_after_update/ Reddit thread on the vaapi error from Oct 2022
## Day 3
After I mentioned to wsinatra that the previous day had been spent fiddling with DOSBox, he suggested trying Elder Scrolls II: Daggerfall to see whether it would run. The first step was the hassle of downloading the game. The official website has released it at no charge, but the Javascript-based site does not load at all in w3m/lynx. The Bethesda website did load in Qutebrowser (before the browser segfaulted while scrolling down the page) and in Luakit after waiting 4-7 minutes each time for the browser to launch and then render, and unfortunately the link was to a Steam version. That was a non-starter for the available memory. Eventually found another copy, and after extracting a .bin and .cue file (renaming them to be more Unix-friendly), ran the installer in DOSBox:
mount c /path/to/install/dir -freesize 1024
imgmount d /path/to/daggerfall.cue -t iso -fs iso
d:
install
The first command maps a system path to the C: drive and sets the free space on the virtual disk to 1 GB. The next ones mount the .bin image to the D: drive, switch to it and run the installer. Following the install prompts, the "Huge installation (450 MB)" install size was selected and the game was unpacked to the default install path at `C:\dagger`.
There was a sound card configuration step at the end of the installation, but I did not set up system audio to be able to test on the fly whether the game music/sfx actually plays, since the role of music player has been mostly relegated to a mobile device. It turned out to be a small loss for experiencing the game, as there is an animated cut scene when the story starts, with no dialogue captions. Users have reported game sounds working with SoundBlaster 16. In previous experience with SimCity 2000 on another computer, sound worked automatically with the system's Pulseaudio setup. The DOSBox wiki also has great documentation for the emulator features, commands and instructions for specific game titles.
After installation, the game still required the D: drive to be mounted to be playable. Running `dagger.exe` inside the install path starts the game. Had minor trouble at first with some mouse clicks not registering while setting up character stats, but this may be due to running on an unoptimised CPU cycles setting (the default 1k instead of a higher number like 10k) and should be fixable by editing the user's DOSBox config file (a sketch of that tweak follows below). Once the first chapter and tutorial began, the mouse was responsive enough and the issue seemed to have disappeared.
=> 2023-07-10-occ-2023/daggerfall-install-ad.png Much anticipated game coming soon … in 1997
=> 2023-07-10-occ-2023/daggerfall-character-setup-map.png Character world map
=> 2023-07-10-occ-2023/daggerfall-character-stats.png Character stats
=> 2023-07-10-occ-2023/daggerfall-chapter-1.png Obligatory open book scene
=> 2023-07-10-occ-2023/daggerfall-opening-scene.png Play the game or we shall set your CPU on fire
=> 2023-07-10-occ-2023/daggerfall-opening-story.png Opening story
=> 2023-07-10-occ-2023/daggerfall-gameplay.png Game view at 640 x 400 px
Initial impression of the game is positive — there is a decent range of classes and the stat chart is presented in a compact and legible layout. My personal preference is for an isometric or almost top-down map view rather than first-person view, to take in more of the surroundings, but others may find the close-up angle of first-person more engaging. Setting the game aside for now though, even if it might be funny to write "Played Daggerfall" under the heading and call it a day for the rest of the week.
In terms of memory usage today, Dillo gave me a slight shock earlier when one of its processes (/usr/lib/dillo/dpi/https/https.filter.dpi) shot up to 250 MB from 3 MB on launch while downloading a file, but it cleared up shortly after. The browser does not support frames or Javascript, but starts up quickly and is a solid option for viewing pages with images in-place.
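As a rough sketch of that config tweak (hedged: untested during the challenge, and the default config file name varies by DOSBox version, e.g. something like ~/.dosbox/dosbox-0.74-3.conf), the cycles value lives under the [cpu] section:
[cpu]
cycles=10000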
### Links
=> https://elderscrolls.bethesda.net/en/daggerfall The Elder Scrolls II: Daggerfall official page
=> https://www.dosbox.com/wiki/GAMES:The_Elder_Scrolls_II:_Daggerfall The Elder Scrolls II: Daggerfall on the DOSBox wiki
=> https://forums.launchbox-app.com/topic/41435-dosbox-freesize-not-working/ Additional instructions on a LaunchBox forum thread
=> https://www.dosbox.com/wiki/IMGMOUNT#Loading_a_CUE_image Loading a cue image
## Day 4
Otherwise known as the Day of Fail, in which someone who said earlier wrangling with Wine was not on the menu tried to run SimTower in Wine, with disastrous results. It seemed plausible enough at first:
mkdir /path/to/simtower
mount -t iso9660 -o loop /path/to/simtower.iso /path/to/mount/point
cp -r /path/to/mount/point /path/to/simtower
umount /path/to/mount/point
cd /path/to/simtower
wine setup.exe
Instead of a few hours of fun, it gave the following errors:
0084:err:vulkan:wine_vk_init Failed to load libvulkan.so.1.
010c:err:environ:init_peb starting L"C:\windows\syswow64\winevdm.exe" in experimental wow64 mode
010c:err:module:LdrInitializeThunk "krnl386.exe16" failed to initialize, aborting
010c:err:module:LdrInitializeThunk Initializing dlls for L"C:\windows\syswow64\winevdm.exe" failed, status c0000005
Adding the vulkan-loader package removed the libvulkan.so.1 error, but the errors with winevdm persisted. It appears that the 32-bit game had a 16-bit installer, like games sometimes did around the same period. The FAQ on the WineHQ website indicates Wine could run 16-bit code on a 64-bit OS, but the instruction to enable 16-bit code execution did not work in Alpine x86_64 on the latest LTS kernel:
echo 1 > /proc/sys/abi/ldt16
ash: can't create /proc/sys/abi/ldt16: nonexistent directory
My attempt to enable it on boot via an OpenRC init script not only did not work, it triggered another problem. On reboot, there were "Segmentation fault" errors during startup when the networking service attempted to connect to a wireless network. The wireless interface was up, but wpa_supplicant could not authenticate successfully to the network, confirmed by manually running the following command: `wpa_supplicant -iwlan0 -c/etc/wpa_supplicant/wpa_supplicant.conf`. What probably happened was that some core packages, including musl, were upgraded, which caused incompatibilities with the linux-firmware and kernel packages.
The obvious solution was to synchronise the package dependencies again by fetching the latest ones, but this was hampered by broken networking. If other methods fail, the netbook has an Ethernet port and setting up a networking profile for it should work, though it is a hassle all the same. Fortunately, there was a backup installation on hand that could be used to log into and restore the broken system.
This is the second time in the past 1.5 months that something broke while installing new packages, necessitating running `apk fix` or similar on groups of packages. Sure, it is edge/testing and instability should be expected. The edge repos contain a number of applications that are unavailable on stable releases, and might not be for some time.
The broken install included an encrypted root filesystem (boot and swap are regular partitions), so an additional step is needed to unlock the root partition. Once logged into the filesystem, run `apk upgrade` with toes crossed and hope it fixes the issue:
-- Get the device ID in the form of /dev/sdX, or /dev/sdc in this case
blkid
-- Unlock the LUKS volume and register it at /dev/mapper/sdc3
cryptsetup luksOpen /dev/sdc3 sdc3
-- Mount the LUKS volume to a directory
-- This step is required or the mount command will return an error
-- "mount: /dev/mapper/sdc3/proc: mount point is not a directory."
mount /dev/mapper/sdc3 /mnt
-- Also mount the /boot partition in case there is a kernel upgrade
-- that needs to write to the partition
mount /dev/sdc1 /mnt/boot
for i in /proc /sys /dev /dev/pts; do mount -o bind $i /mnt$i; done
-- Enter the broken installation
chroot /mnt
-- Upgrade all packages
apk upgrade -a
-- Exit and unmount
exit
for i in /dev/pts /dev /proc /sys /boot; do umount /mnt$i; done
umount /mnt
cryptsetup luksClose sdc3
It did not fix the issue. All packages were up to date, but wireless networking was still unable to authenticate with networks. Using `dd` to clone the backup installation to the USB stick also failed with an IO error after ~187 MB transferred, which might be due to the encrypted partition, as erasing the stick and flashing an ISO worked. The backup will suffice for the rest of the week with the resource allocation adjusted, but there will be no installing of system packages, because the current state of edge is unusable for me. Not sure yet about the next step.
### Links
=> https://en.wikipedia.org/wiki/Wine_(software)#Backward_compatibility Wikipedia entry about Wine's backward compatibility
=> https://wiki.winehq.org/FAQ#16-bit_applications_fail_to_start Allowing 16-bit applications at the kernel level
=> https://stackoverflow.com/a/21799844 Is it possible to run 16-bit code on a system with Intel IA-32e mode?
=> https://askubuntu.com/questions/136714/how-to-force-wine-into-acting-like-32-bit-windows-on-64-bit-ubuntu 32-bit Wine environment on a 64-bit OS
## Day 5
Started a new installation of Alpine Linux 3.18, with a base configuration running within an hour. Then, while installing the display server, this error appeared:
WARNING: libcdr-0.1.7-r11: unable to cache: Read-only file system
Segmentation fault
ERROR: Unable to lock database: Read-only file system
ERROR: Failed to open apk database: Read-only file system
This does not bode well for the USB media, which had only been in use for 2 months. It is the first time there have been issues with this brand of storage media. The `dd` command failure the previous day and the networking segmentation fault may also have been related. Sure enough, the next time the installation was rebooted, the system could not mount the unlocked filesystem and dropped into an emergency console. The USB stick was a write-off, pun intended. The next irrational thing to do was to pull out an older stick from the same product line with 25% of the storage capacity of the previous one and reinstall again.
=> 2023-07-10-occ-2023/usb-media.jpg USB media
Originally the image was to come from a USB webcam, but Cheese and guvcview could not find the device. The webcam showed up in `dmesg` but was not identified properly there or in `lsusb`. It might be broken. Pulled out a Nintendo 2DS for a low-res image instead. Taking pictures with it can be a bit tricky initially, because the viewfinder displayed on the top screen does not match the image output — the view appears more zoomed in and horizontally off-center compared to the JPG result. This is due at least in part to the camera taking stereoscopic 3D images (even though the 2DS model cannot display them in 3D), with 2 lenses offset left and right of the center:
=> 2023-07-10-occ-2023/n150.jpg Original JPG
=> 2023-07-10-occ-2023/n150-left.jpg Left image
=> 2023-07-10-occ-2023/n150-right.jpg Right image
The camera saves a JPG and an MPO file for each photo taken, the latter of which contains the image objects viewable in stereoscopic 3D on a 3DS or some other viewer. The pair of images can also be combined side by side to emulate a 3D effect when viewed with a headset similar to the Google Cardboard, or to generate an anaglyph for red/blue 3D glasses. Here is an example using Exiftool and GraphicsMagick:
-- Extract the left and right images from the MPO
exiftool -b -mpimage2 input.mpo > left.jpg
exiftool -trailer:all= input.mpo -o right.jpg
-- Generate a side-by-side (SBS) image
gm convert +append left.jpg right.jpg sbs.jpg
-- Generate an anaglyph image
gm composite left.jpg right.jpg -stereo anaglyph.jpg
=> 2023-07-10-occ-2023/n150-sbs.jpg SBS stereoscopic image
=> 2023-07-10-occ-2023/n150-anaglyph.jpg Anaglyph
=> 2023-07-10-occ-2023/n150.mpo MPO viewable on Nintendo 3DS
The stereoscopic images were a tangent, though par for the course with the way things panned out in the past few days. Memory use today is ~90-141 MB, on the higher end after reverting to sakura as the terminal emulator, which has better text selection than xterm.
### Links
=> https://softwarerecs.stackexchange.com/a/77394 Open source 3D image viewer for Multi Picture Object files in Linux
=> https://stackoverflow.com/questions/20737061/merge-images-side-by-side-horizontally Merge images side by side horizontally
## Day 6
Low activity today, but finally got around to fixing a bug that caused post titles to be truncated on the list of log posts. It is less an actual fix than simply removing an extraneous if/else block in a string parsing function. The (very hacky) script used to generate the post list and rss feed can be found here:
=> https://git.tilde.town/mio/gemwriter Gemwriter
Memory use today hovers at 149 MB, with the top 3 memory-consuming processes being w3m, sakura and the Xorg server.
## Day 7
Spent some time updating a shared zettelkasten with a few tips learned during OCC. (A zettelkasten is a collection of atomic entries on a range of topics, with lots of interlinking among entries.) Merging changes into the zettelkasten repo using the git host web UI in Luakit was almost painfully slow with the single tab open, but each page did load eventually after 1-3 minutes — log in, make a pull request from a branch comparison page, submit the pull request form, approve the merge request and see the subsequent merge-successful confirmation message. A 5-minute task that took ~15 minutes. Resource usage went up to 240 MB RAM and ~2.41 load with a password manager included in the mix. The RAM usage calculation seemed a bit low, but htop and free both reported the same numbers. Swap stood at 80 MB. Technically the changes could have been pushed straight from the git CLI with maintainer permissions on the repo, but this was more a test to see what works best for merge etiquette.
Next, I tried to watch a Peertube video. The video did not load on the page in Luakit, which may simply be due to plugins being disabled in the Luakit user config, but after waiting ~6 minutes for the page and then the download modal to appear, it was possible to copy one of the download links and stream it with mpv. The 1080p and 480p versions were viewable, confirming at the same time that Pulseaudio was working. Memory usage was reported as 59.3% of the total 279 MB and the CPU fluctuated between 40-50% after streaming the 480p version for 10 minutes, while the 1080p version used 80-90% of the CPU and climbed to 247 MB within the first minute. There was a warning about A/V desynchronisation, but it was unnoticeable to me in the video, which did not contain lip movement. The video seemed to stutter once or twice while playing in the background during multitasking, e.g. writing in vim, but that it could play 1080p video on an Atom core is still good news.
Also went back to DOSBox to check whether sounds were enabled, with mixed results. In SimCity 2000, the music was reduced to staccato notes that did not resemble the original tracks, but the sound effects were fine. In two other games tested, both the music and sfx worked perfectly, so the DOSBox settings probably just needed some fiddling for that particular game.
## Conclusion
It has been a fun week, though the experience would have been better without the storage media failure mid-week. Oddly, the older 8 GB USB stick ("[model-name] 2.0") feels faster than the 32 GB ("[model-name] 3.0"). Installing a large package causes occasional 5-10 seconds of lag, but does not grind the entire system to a halt for minutes at a time, for example. One point which may turn out to be irrelevant: previous USB sticks were plugged into the single USB port on the left side of the netbook, which can be used to charge devices and runs warmer than the other two ports on the right side. The 8 GB one was subsequently attached to one of the right side ports. Maybe the stick suffered from overheating, though other sticks plugged into the same port for longer did not appear to be affected.
Unlike the previous year, there was no concerted goal to start with — the machine was used for common computing activities like web browsing, plaintext editing, viewing images and PDF documents, and checking the fediverse. There was also a bit of light compiling of a Lua script to a C executable and playing a handful of old DOS games, but those are hardly esoteric endeavours. As expected, browsing the modern Javascript-heavy http web was a hassle, though it was still a little surprising how sluggish the browser interface itself became, not just the page rendering. While not having a slick appearance, Dillo offers a pleasant interpretation of what the mainstream web could have been — fast, resource-efficient, content-focused.
One thing I did not get around to (besides trying to run SimTower, again, under a 32-bit OS) was a sketch in Inkscape, but the application does run, with the minor UI issue of a few windows being too tall. Resizing with Alt + right-click and dragging does not always shrink the window enough for the affirmative and cancel buttons to show up at the bottom of the screen.
Overall, while the netbook is not very outstanding compared with its contemporaries, such as the EeePCs, in terms of battery life or other specs, it is decent for general computing. The matte display is great for using the netbook near bright overhead lights or sunlight. 1-2 reviews of the netbook complained about the cramped keyboard and the difficulty of typing accurately on it. The keyboard was all right for me, being used to flat and compact keyboards, as well as not having to press hard for keys to register. If there is only one takeaway from this, it is backups. And to get better USB sticks.
]]>gemini://tilde.town/~mio/log/2022-07-21-occ-2022.gmi 2022-07-21T20:52:00Z mio
https://dataswamp.org/~solene/2021-07-07-old-computer-challenge.html Old Computer Challenge
=> http://lambdacreate.com/posts/26 wsinatra's blog post at lambdacreate
=> https://dataswamp.org/~solene/2022-07-01-oldcomputerchallenge-v2-rtc.html this year's challenge
=> https://github.com/nikolas-n/GNU-Linux-on-Asus-C201-Chromebook list of distros known working on the Asus C201 Chromebook
## Day 1
* Internet time: 45m
* 20m - looked up resource limit settings, added system-wide fonts
* 10m - downloaded podcasts from a mobile device
* 10m - checked mail, prefetched RSS feeds, checked IRC
* 5m - checked local fediverse
* RAM: 175M
* CPU: 0.30
Since it was my first time doing the challenge, one initial thing to do was to take stock of current resource usage. According to `htop`, the most memory-consuming applications were:
* Graphical web browser (114M - 733M) - cut out 733M by closing the browser with 10-15 tabs open.
At least it was still usable with 1 tab open, ~114M with a static page and no Javascript.
* RSS feed reader (22M - 200M) - 80M for a 150M db with entries accumulated over a few years and about 30 feeds, which after a few days open became 200M. 22M for a 6M db with the same number of feeds after moving the sqlite db and letting it recreate a new one.
* Display server and window manager (65M - 98M) - 98M with a few GUI apps open, dropped to 65M with only the terminal emulator open.
* IRC client (28M - 61M) - 61M with a Matrix plugin enabled, 28M without the plugin. The only other working CLI Matrix client found that supported end-to-end encryption does not yet support Space rooms and had a few cosmetic bugs. Running the flagship web UI client in a browser is not an option for the experiment, being too resource-heavy. Skipped the Matrix plugin.
* Password manager (60M) - replaceable with a CLI version, but closed for now when not in use.
* PDF viewer (58M) - to be supplemented with a CLI version when reading text-only files.
* Clients that were usually offloaded to an SSH server: mail, RSS, IRC and Mastodon. Of those, RSS and Mastodon will be added to the computer to be counted in the baseline RAM usage.
Baseline RAM usage was in the range of 146M - 181M using mainly CLI applications:
* A multiplexer with a file manager, text editor, clients for Gemini, HTTPS web, RSS and Mastodon.
* A few utilities, e.g. SSH, NetworkManager, language input bus, process monitor.
Including mail and IRC clients would increase the baseline usage by 40M (12M and 28M respectively) to around 221M, with all the applications mentioned running in the background. Given that most of the clients require an internet connection to update content, and given the online time limit, a few of them like the RSS and Mastodon clients might be checked once daily and closed when not in use to recover some RAM. Multiplexer sessions can group applications together and be re-launched with one command.
CPU-wise, the list was shorter, mostly because the system had no video acceleration (which should be fixable by custom packaging video drivers), so even when I had time to play video games, the majority of modern titles wouldn't run well on it. (Text adventures were fine though, as was retro games emulation.) Applications that tapped noticeably into the CPU were:
* VLC (3.0-4.0) - consistently high load only when watching or decoding videos; using the CLI command to stream audio doesn't take much in the way of resources. Before the challenge, I was already streaming media mostly from mobile for a while, not minding the smaller screen, and didn't watch a lot of videos anyway.
* Web browser (0.2-1.2) - unsurprisingly, the web browser was also prone to CPU spikes, triggered by certain websites and consistently on pages with animated images. The simplest response was to avoid sites with known issues for the browser or to check them in a text-based browser like Lynx.
Typically my internet time might begin with checking mail, RSS, IRC and sometimes the fediverse, the latter of which I could check a little more often after finding a TUI client that I could keep open for longer stretches. Due to the time limit, I switched to downloading podcasts in the background for later instead of streaming. Spent significantly less time in IRC, from roughly 1-2h to 5m. The client was left open and connected on a server, acting like a bouncer to save conversations in scrollback. Only logged the time actually spent reading and replying. Also had less time on the fediverse relative to other networks such as IRC.
I like the idea of a decentralised social network; unfortunately, I'm also very selective about what kind of posts I'd like to read or follow. It's also easier to have casual conversations on IRC in real-time that switch topics or go on at some length, and not worry about potentially overloading other people's timelines.
Couldn't access the official Forth website from Lynx, got a `403 Forbidden` error. The web server probably misidentified it as a bot. Not a problem to load it in a GUI browser quickly to download the PDF book for offline reading, but did wonder how many other websites have a similar block.
## Day 2
* Internet time: 1h
* 5m - downloaded podcasts
* 5m - checked mail, prefetched RSS feeds and local fediverse timeline
* 50m - checked IRC
* RAM: 130M - 275M
* CPU: 0.2 - 1.0
Easily spent most of the internet hour conversing with people on IRC. One of the servers I lurk in, affectionately known as "casa" among the regulars, is typically quieter on weekends and livens up with banter during weekdays. In leaving after time was up I had to cut a conversation short and felt bad about it, as leaving wasn't compulsory, more an arbitrary personal choice stemming from the challenge. Decided to add IRC (currently connected over SSH anyway) as part of the messaging exemptions to the internet time limit, to take effect on day 3.
For the remainder of the day, I wanted to see what else I would miss without internet access. It's great being able to connect with people anytime despite distance and timezones, and have all sorts of interesting, funny and productive conversations. Being on IRC with a friendly, mutually supportive crowd has a positive effect on my day and was an aspect of internet connectivity I'd like to keep. Would have to look for a way to download Gemini sites tomorrow. Ran out of internet time and wanted to download the Braxon stories by Joneworlds to continue reading offline after following the series over multiple episodes of the Tilde Whirl podcast.
Another side effect of using up the internet time for the day was turning to other activities I had wanted to try for a while. Played a little solo tabletop tea shop sim called Whistling Wolf Café with the instructions PDF open on the screen and an Android app for dice rolls. The description estimated gameplay to be 10-20 minutes, though my first full game was 1.5 hours. It's easy to play, with short rounds that make it similarly easy to pause and resume.
=> https://nightfall.city/x/republic.circumlunar.space/users/joneworlds/index.gmi Braxon
=> https://tilde.town/~dozens/podcast/ Tilde Whirl podcast
=> https://luckynewtgames.itch.io/whistling-wolf-cafe Whistling Wolf Café
## Day 3
* Internet time: 50m
* 10m - downloaded podcasts, mail and RSS
* 30m - looked into saving Gemini content, downloaded PDF viewers
* 10m - looked up tabletop games licensed under CC-BY/CC-BY-SA
* RAM: 146M - 454M
* CPU: 0.20 - 0.83
Went looking for a download manager for the Gemini protocol and didn't find a suitable utility on the Gemini software list. The closest thing was gemini-fetch, which was more a library than a downloader like wget. One possibility was to use wget to fetch files from a Gemini proxy, but it was not as straightforward as pointing wget to a subdirectory under the proxy url (it reported the error `disallowed by robots.txt`, and the `-e robots=off` flag didn't work). It might work by taking the list of URLs indexed by wget from the log output and then passing them back to wget inside a shell script loop to fetch each page separately.
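A minimal sketch of that loop (untested during the challenge; it assumes the page URLs have already been pulled out of wget's log into a urls.txt file):
while read -r url; do
  wget "$url"
done < urls.txt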
Fortunately in the case of Braxon, the author included an ebook of all the journal entries to date in a gopherhole, which could be downloaded from Lynx and saved the trouble of parsing the wget log. For viewing ebooks, I had been using a local build of Bookworm. The interface for managing its book collection is a bit buggy, but the viewer does work. A light option, if lack of formatting would be tolerable, was to convert to plain text, save or pipe it to a pager like `less` for reading:
epub2txt file.epub - | less
epub2txt file.epub > file.txt
The same idea could be used for text-only PDF files using pdftotext: `pdftotext file.pdf - | less`. However, I regularly browse PDFs that are partly or entirely composed of images, which text conversion doesn't handle. In search of a lighter PDF viewer, I tried a few different applications with a 2-page text-only PDF and a 9-page image-based PDF, just two examples of files I might typically open. These were:
* CorePDF - 40M to open both files. It sat between Zathura and Evince for number of features.
* Evince - 40M and 61M. By far the most featured of the viewers and one I've used for a few years, but it uses a little more memory. There is also a thumbnailer utility invoked by some file managers to generate PDF thumbnails, which was useful to have until it spiked in CPU usage to 100% handling files larger than about 80M.
* mupdf - 8M and 24M. (On Alpine, install the `mupdf-x11` package for the executable.) Navigating between pages seemed a bit smoother than CorePDF. Dragging with the right mouse button held down selected text to copy to the clipboard. It also happened to load the EPUB format, which was a welcome surprise to go with the ebook of stories. However, I later found it used more RAM with larger files — 156M for a 300-page ebook compared to 57M in Evince.
* Zathura - 25M, 30M-37M depending on the plugin used. Zathura supported viewing PDFs through two plugins, `zathura-pdf-poppler` or `zathura-pdf-mupdf`. Dragging with the left mouse button held down selected text but copied it to the primary selection rather than the shared clipboard. Not sure if there's a way to configure it.
Bookworm also supported PDFs and its RAM usage is close to Evince, but because it automatically added any opened files to its collection (regardless of whether it could actually render them), I preferred a separate PDF viewer.
=> https://gemini.circumlunar.space/software/ Gemini software list
=> https://github.com/RangerMauve/gemini-fetch gemini-fetch
=> https://stackoverflow.com/questions/3570591/cli-pdf-viewer-for-linux using pdftotext
## Day 4
* Internet time: 35m
* 10m - downloaded podcasts, mail, RSS and fediverse timeline
* 25m - looked up tabletop games under open game licensing
* RAM: 153M - 538M
* CPU: 0.10 - 1.39
Used the GUI web browser a bit during a search and got up to 4 tabs open with static pages (about 329M) before exceeding resource limits and having to close additional tabs. The browser had a keybinding configured to save sessions, which can be restored if the browser closes abruptly or is terminated by earlyoom.
Another category of applications I should probably check are graphics programs such as Inkscape. Previously, the application's memory usage had shot up to 511M with a file open for 1-2 days. After making a simple cover design with it for a few hours, it gradually inched up to 196M from 114M with a blank document, which fortunately was still usable.
Online searches felt slower to complete while checking the clock frequently to pace the internet time available. For example, looking into the topic of tabletop games with open game licensing has taken two days so far, with leads but somewhat scattered results. Following links took time and was more cumbersome to do with only 4 browser tabs open. Also reserved some internet minutes in case I needed access for something important, but was too tired by the end of the day to make use of the remaining time.
## Day 5
* Internet time: 55m
* 10m - downloaded podcasts, mail, RSS and fediverse timeline
* 10m - browsed links from the fediverse
* 35m - looked up a few Gemini clients, browsed Gemini capsules
* RAM: 248M
* CPU: 0.17
On the quest for a smol web client. Among the GUI browsers were Lagrange and Castor, which used 73M and 19M respectively with 1 window open. Both were good options visually, but I preferred a client with some keyboard operability. Of the CLI options, I liked Amfora's interface, with colours and tabs. The only drawback with its tabs was that currently only the right-most tab could be closed, which was a bit annoying. Bombadillo had a webmode for http/https (disabled by default) that could make navigating between protocols more seamless. RAM usage was moderate, 30M (amfora) and 38M (bombadillo). Also wanted to try Asuka, but it was unavailable in the Alpine repos and I didn't get around to packaging it locally.
Read *Braxon* and a few other stories at Joneworlds. Just noticed I didn't know how to select and copy text in an EPUB file within mupdf; right-click dragging as in PDFs didn't work.
=> https://github.com/makeworld-the-better-one/amfora Amfora
=> http://bombadillo.colorfield.space/ Bombadillo
=> https://git.sr.ht/~julienxx/asuka Asuka
## Day 6
* Internet time: 1h
* 5m - downloaded podcasts, mail, RSS and fediverse timeline
* 35m - looked up licenses for tabletop game titles, checked a Gemini message board
* 20m - browsed Gemini capsules
* RAM: 251M - 379M
* CPU: 0.22
Followed the trail recommended by geminiquickst.art and found a link-aggregating message board. My initial impression of the smol web is of an ecosystem where people can focus on telling stories, reading and communicating without a load of elements all vying for attention at once, and without persistent tracking. There are still pockets of the HTTPS web that fill a similar role, but increasingly they seem to be a smaller part of a web dominated by large silos. For browsing in general, 20-30m time segments worked better for me, which provided time to do longer searches before moving to other tasks (context switching might take a bit of time). Also began reading *Starting Forth* with an interpreter open beside the book to try the examples in it.
=> https://geminiquickst.art/ geminiquickst.art
=> gemini://geddit.glv.one/ Gemini message board
## Day 7
* Internet time: 50m
* 5m - downloaded podcasts, mail, RSS and fediverse timeline
* 45m - looked up tabletop game licenses
* RAM: 257M - 573M
* CPU: 0.16 - 1.72
Exceeded resource limits today. I forgot to launch the browser with Javascript disabled and usage spiked rapidly with 4 tabs open and the other CLI applications running in the background. Quickly got usage back within range again, which allowed 2 tabs with Javascript enabled. There are GUI browsers that use less memory, at the cost of pages not rendering fully and some basic interaction elements not working at all, or combined with crashes and instability. It's a bit like an internet kiosk except it only has one user. Trying to rein in my sarcasm here.
## Day 8 * Internet time: 50m * 5m - downloaded podcasts, mail, RSS and fediverse timeline * 30m - searched for a tabbed Gemini browser, tested another CLI browser * 10m - fetched dependencies to compile a package * 5m - looked up command flag options * RAM: 208M - 343M * CPU: 0.27 - 1.16 A bit sad that I couldn't play in LeoCAD, a toy bricks CAD program — RAM use was a manageable 117M, but it would emit 348% CPU bursts when dragging parts from the parts selection window to the model view. It might work if I could hard-cap the CPU to 1.0. As it were, it would be too much like cheating. While looking through the list of Gemini clients on the official Gemini website, I came across Fafi and tried to compile it again. In the previous attempt the version of racket in the repos was too old (7.x) and according to one of the issue reports, the application needed racket >= 8.2. When the resulting executable ran, it would shortly exit with an error like this:
class*: superclass does not provide an expected method for override
override name: on-close-request
class name: custom-tab-panel%
In the meantime, racket had been updated to 8.5 in the repos, so I retried a simple APKBUILD I had prepared earlier. Initially got an error, which may have been due to the process being terminated for running out of memory:
raco setup: making: /compiler-lib/compiler/commands
raco setup: in /compiler-lib/compiler/commands
raco setup: in /compiler-lib/compiler/private
SIGSEGV MAPERR si_code 1 fault on addr 0x207
Aborted (core dumped)
Re-ran `abuild -r` and got a different error:
Linking current directory as a package
Compiling bytecode... done.
Building executable...find-exe: can't find GRacket executable for variant 3m
This issue seemed to be related to the racket compiler, not just Fafi. A workaround was to do:
raco pkg install
raco exe --3m main.rkt
This was probably missing optimisations or other things, but the resulting binary worked. Wasn't keen on the 212M it used with only 1 window open, but it looked very nice and a healthy ecosystem could use more choices.
TIL w3m has buffers. This was very relevant to my search for a usable web browser that had multiple tabs/views. Would definitely take a closer look at w3m in the coming days.
With internet time almost up for the day, played another solo tabletop game, this time A Day at the Crystal Market. Recently I've been looking at exploratory tabletop games that don't require a lot of materials to play (instructions, maybe a deck of playing cards and 1-2 d6). Hadn't been interested in tabletop games before, but seeing some of the smaller indie games helped me appreciate the wide variety of things that can be done with the genre beyond Dungeons & Dragons and rogue-like dungeon-trawling games.
=> https://www.leocad.org/ LeoCAD
=> https://andregarzia.com/2020/08/fafi-browser-a-racket-based-gemini-client.html Fafi
=> https://todo.sr.ht/~soapdog/racket-gemini/7 racket-gemini issue #7
=> https://github.com/racket/racket/issues/3969 racket issue #3969
=> https://oakenboro.itch.io/a-day-at-the-crystal-market A Day at the Crystal Market
## Conclusion
To revisit the questions I had on day 1:
**What kind of internet experiences would I like to have?**
At this point in time, I look for experiences more focused on people and expression. From the start, I wanted to set aside more time to explore the smol web because it's creative and interesting in its own way, and not only as a refuge from the deteriorating usability of the mainstream web. Didn't get in as much Gopher/Gemini browsing as I'd have liked, with what should have been simple web searches having taken up a portion of the allotted hours. IRC would have filled up much of the allotted time (pleasantly) had I not caved to an exemption. A time limit certainly made me consider where or how to put time towards things I enjoy.
Given the fairly small file sizes of many pages on the smol web, I think there was a missed opportunity for a Gopher/Gemini application that could index and cache links up to 2-3 hops away, to be browsed offline later. It might already be possible with existing clients and I didn't know it then. Better preparation next time.
**What, if anything, did I miss by computing with a predominantly terminal interface?**
The main thing was viewing images, e.g. media attachments in fediverse timelines. Previously, in tut, image attachments could be viewed with the default GUI image viewer via xdg-open, but I switched to toot for a while because tut stopped loading a significant chunk of thread replies, and lost poll voting and media viewing (faster than loading the toot URL in a GUI web browser) in the move. This is one of various inconveniences I'd like to rectify. Related to this was an undercurrent of slight dissatisfaction with the configuration, whereby I either haven't found the most suitable application with the features I'd like (while still being fairly light!), or haven't found the settings and hacks to have things work as desired. That being said, it still looks like a good direction, and re-discovering w3m makes me more optimistic about sorting out the rough edges eventually. In the past few days since the challenge, I adjusted the w3m configuration and, with some warming up on the key bindings, it has improved my web browsing and reading enough to use it concurrently with GUI browsers.
**Were there applications or activities I would have liked to do that were infeasible due to the resource limits?**
* 3D toy brick modelling, unless I could find another compatible open source application (CPU constraint)
* Playing some modern games (RAM constraint and lack of dedicated GPU)
* Compiling packages is still possible in some cases, RAM permitting, but would take much longer than it currently does given the weaker ARM processor
* Watching videos at a quality frame rate, and encoding videos (I don't do much of either at the moment so it's fine for now)
Overall, it was a mildly unpleasant week for someone used to accessing the internet in short bursts anytime throughout the day to look up one thing or another, and who subsequently had to mentally plan ahead to make the most of the time blocks. However, it was also not a hard time, as there were plenty of other things I could do that didn't require an active internet connection. I wouldn't want to do this every day, but a week is roughly enough time to begin seeing patterns, including what worked and what didn't work so well, for future reference.
]]>gemini://tilde.town/~mio/log/2022-03-26-tmux-mail-indicator.gmi 2024-05-09T00:36:30Z mio
MAIL_ICON=" #(cat /var/spool/mail/$USER | grep ' ' && echo '✉') "
set -g status-right "$MAIL_ICON"
set -g status-interval 60
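A small usage aside, not from the original post: after editing the config, it can be reloaded into a running session with tmux's source-file command instead of restarting tmux:
tmux source-file ~/.tmux.conf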
=> 2022-03-26-tmux-mail-indicator/tmux-mail-indicator.png tmux status bar
---
My town tmux session usually includes chat and 1-2 other applications open. However, writing town mail is not a regular activity for me (unless someone has already started a conversation with the likelihood of a reply), which often means forgetting to check my inbox for weeks or months. In this situation, it is helpful to have a mail indicator to alert me whenever there is a new message. The main piece to add to your tmux configuration (by default `~/.tmux.conf`) is:
MAIL_ICON=" #(cat /var/spool/mail/$USER | grep ' ' && echo '✉') "
This line checks the user mail spool and outputs a little envelope icon if it finds a space character in the file contents (a properly formatted mail message includes headers and plenty of space characters). When you open Alpine or fetch mail from another client, the mail client will copy all messages to the inbox and the mail file will be empty again. You can set it to display on the left or right side of the status bar: `set -g status-left "$MAIL_ICON"` or `set -g status-right "$MAIL_ICON"`. `set -g status-interval [seconds]` determines the refresh rate, including that of any other system information indicators in the bar. My `status-interval` is set to 60 seconds, which goes well with a clock displayed in [hours]:[minutes] in my status bar, and my disk usage does not change significantly enough to warrant more frequent monitoring.
If you come upon this post and the indicator does not work, the most likely explanation is that the mail file location is incorrect or has changed. On some systems it might be `/var/mail/$USER` or a custom `$MAIL` variable. Check with your friendly local admin for the correct path.
The variants below did *not* work.
MAIL_ICON=" #(cat $HOME/mbox | grep 'Status: O' && echo '✉') "
This looks in Alpine's mbox for unread messages. Alpine has to fetch the mail and add it to the mbox, and if Alpine isn't running, the mbox won't be updated.
MAIL_ICON=" #(has_mail=
cat /var/spool/mail/$USER
; test -z $has_mail || echo '✉') "
This is valid Bash; maybe tmux did not like custom variables. Also tried rephrasing it with an `if` conditional instead of custom variables, but no go either. It should work if it were moved to its own script file and called from within the tmux config, but there is no need to add another file if there is a simpler solution using only the tmux config.
]]>gemini://tilde.town/~mio/log/2021-11-27-kvm.gmi 2021-11-27T16:09:00Z mio
* *Add ISO* and paste the ISO link to the Virtual image
* *List VPS* > *Manage* (next to the VPS name) > *Settings* > *VPS Configuration*
* Temporarily change the boot order: *1) CD Drive 2) Hard Disk*
* Select ISO: (select the custom user ISO)
* Save and reboot the VPS
* Log into the VPS via VNC. Most control panels have a VNC address for external clients and/or a built-in web VNC client for system recovery.
* Get the CD drive and disk device names: `df -h`
* The CD drive usually has a name like `/dev/sr0`
* Example of disk device name: `/dev/vda`
* Unmount the vda device: `umount /media/vda`
* This should be done before proceeding with the setup script, or it will fail during the disk partitioning with warnings about partitions not being found.
* Run the setup script: `setup-alpine`
* Example configuration for network setup:
* Available interface: eth0
* IP address for eth0: (IPv4 address from ifconfig/ip link output)
* Netmask: (from ifconfig/ip link, e.g. 255.255.255.0)
* Gateway: (often .0 or .1 in the IPv4 address subnet, e.g. xxx.xxx.xxx.1)
* DNS nameserver(s): (enter IPs separated by commas)
* Make sure to get the network connection working or the disk setup will fail to download the utilities needed to partition the disk. If it is unable to update the repo list, it's very likely there is something wrong with the network setup — ctrl+c to exit the script and re-run the script while checking the connection details carefully.
* The `setup-disk` part of the script will create 3 partitions: boot (~105 MB), swap (~4173 MB) and root (fills the remaining space).
* Before rebooting the VPS, go back to the VPS control panel, revert the boot order back to *1) Hard Disk 2) CD Drive*, set the ISO to *none* if you don't plan to reinstall again soon, and save. Reboot from the control panel, as the VM may still be running on the previous boot order.
## Additional setup
* Edit the list of repos:
* `vi /etc/apk/repositories`
* Remove the CD drive entry `/media/cdrom/apks`
* Optionally uncomment the community repo and/or the edge repo URLs if you want to pull from the testing repos
* Refresh the repo index: `apk update`
* Install some essential packages, e.g. a different shell or editor:
* `apk add bash doas vim`
* Add a new standard user and include an SSH public key for login.
* `adduser $user`
* From the ssh client: `ssh-copy-id -i ~/path/to/ssh/key.pub $user@$ip`
* **Important:** disable SSH root login and password authentication.
* `vi /etc/ssh/sshd_config`
* `PermitRootLogin no`
* `PasswordAuthentication no`
* Restart sshd: `rc-service sshd restart`
## Firewall setup
If your VPS doesn't have IPv6, you can skip the `ip6` parts.
* Load the iptables modules:
* `modprobe ip_tables`
* `modprobe ip6_tables`
* Install the packages: `apk add iptables ip6tables`
* Set services to start automatically at default runlevel:
* `rc-update add iptables`
* `rc-update add ip6tables`
* Add a few simple firewall rules for IPv4, using the example to add more ports as needed: `nano /etc/iptables/ipv4.rules`
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT
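For instance, to also accept inbound HTTP and HTTPS (hypothetical extra ports for illustration, not part of the original setup), rules of the same shape could be added above the REJECT lines:
-A INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 443 -j ACCEPT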
* Repeat for IPv6: `nano /etc/iptables/ipv6.rules`
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -j REJECT
-A FORWARD -j REJECT
COMMIT
* Apply the rules:
* `iptables-restore < /etc/iptables/ipv4.rules`
* `ip6tables-restore < /etc/iptables/ipv6.rules`
* `/etc/iptables/rules-save`
* `/etc/ip6tables/rules-save`
* To see currently active rules, use `iptables -L` and `ip6tables -L`.
* Start the firewall:
* `rc-service iptables start`
* `rc-service ip6tables start`
## Troubleshooting
### Changing the network connection details
* eth0 configuration: `vi /etc/network/interfaces`
iface eth0 inet static
address $address
netmask 255.255.255.0
gateway $gateway
* Edit nameservers: `vi /etc/resolv.conf`
nameserver $dns1
nameserver $dns2
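After changing either file, the new settings presumably need the networking service to be restarted to take effect; a hedged sketch using the standard OpenRC service:
rc-service networking restart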