Tux Machines
Posted by Roy Schestowitz on May 14, 2023
=> ↺ The paradox of ZFS ARC non-growth and ARC hit rates
We have one ZFS fileserver that sometimes spends quite a while (many hours) with a shrunken ARC size, often tens of gigabytes below its (shrunken) ARC target size. Despite that, its ARC hit rate is still really high. Well, actually, that's not surprising; that's kind of a paradox of ARC growth (for both actual size and target size). This is because of the combination of two obvious things: the ARC only grows when it needs to, and a high ARC hit rate means that the ARC isn't seeing much need to grow. More specifically, for reads the ARC only grows when there is a read ARC miss. If your ARC target size is 90 GB, your current ARC size is 40 GB, and your ARC hit rate is 100%, it doesn't matter that you have 50 GB of spare RAM, because the ARC has pretty much nothing to put in it.
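As a rough illustration of the mechanism described above (a sketch, not something from the article): on Linux systems with OpenZFS, the ARC counters are exposed in /proc/spl/kstat/zfs/arcstats, so a few lines of Python are enough to see the combination in question, an ARC size well below the target size next to a very high hit rate.

```
#!/usr/bin/env python3
# Minimal sketch (not from the article): on Linux with OpenZFS, the ARC
# counters live in /proc/spl/kstat/zfs/arcstats. This prints the current
# ARC size, the ARC target size ("c"), and the hit rate, to show the
# combination described above: size far below target, hit rate very high.

ARCSTATS = "/proc/spl/kstat/zfs/arcstats"   # usual OpenZFS kstat path

def read_arcstats(path=ARCSTATS):
    stats = {}
    with open(path) as f:
        for line in f.readlines()[2:]:       # skip the two header lines
            name, _kind, value = line.split()
            stats[name] = int(value)
    return stats

if __name__ == "__main__":
    s = read_arcstats()
    gib = 1024 ** 3
    lookups = s["hits"] + s["misses"]
    hit_rate = 100.0 * s["hits"] / lookups if lookups else 0.0
    print(f"ARC size:   {s['size'] / gib:6.1f} GiB")
    print(f"ARC target: {s['c'] / gib:6.1f} GiB")
    # The ARC only grows on misses, so a near-100% hit rate means
    # the gap between size and target can persist for a long time.
    print(f"hit rate:   {hit_rate:5.1f}%")
```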
=> ↺ 50 years in filesystems: 1974
Progress is sometimes hard to see, especially when you have been part of it or otherwise lived through it. Often, it is easier to see if you compare modern educational material, and the problems it discusses, with older material. And then look for the research papers and sources that fueled the change.
In Linux (and Unix in general), this is easy.
=> ↺ 50 years in filesystems: 1984
The original Unix filesystem was doing well, but also had a large number of obvious problems. BSD Unix undertook an effort to fix them, and this is documented in the book “The Design and Implementation of the 4.3BSD UNIX Operating System” by Leffler, McKusick et al.
A more concise, but also more academic discussion can be found in the classic 1984 paper A Fast File System for UNIX, which lists Marshall McKusick, Bill Joy (then at Sun), Samuel Leffler (then at Lucasfilm) and Robert Fabry as authors. The paper promises a reimplementation of the Unix filesystem for higher throughput, better allocation and better locality of reference.
=> ↺ 50 years in filesystems: 1994
In 1994, the paper Scalability in the XFS File System saw publication. Computers have gotten faster since 1984, and so has storage. Notably, we are now seeing boxes with multiple CPUs, and with storage reaching into the terabytes. The improvements to the 4.3BSD fast filing system (or the modified version in SGI IRIX, called EFS) were no longer sufficient.
SGI's benchmarks cite machines that had large backplanes with many controllers (one benchmark cites a box with 20 SCSI controllers), many disks (three-digit numbers of hard drives), and many CPUs (the benchmarks quote 12-socket machines) with a lot of memory (up to one gigabyte quoted in the benchmarks).
Filesystems became larger than FFS could handle, files became larger than FFS could handle, the number of files per directory led to long lookup times, central data structures such as allocation bitmaps no longer scaled, and global locks made concurrent access to the file system with many CPUs inefficient. SGI set out to design a fundamentally different filesystem.
Also, the Unix community as a whole was challenged by Cutler and Custer, who showed with NTFS for Windows NT 4.0 what was possible if you redesigned from scratch.
=> ↺ The new .zip TLD is going to cause some problems
So what happens when things which are not domain names look like they are domain names? I've been worrying about this for a few years: [...]
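One way to picture the concern (a sketch, not taken from the article; the pattern and the sample text are invented for illustration): many clients auto-link anything in prose that looks like a hostname, and with .zip now a delegated top-level domain, ordinary file names suddenly qualify.

```
#!/usr/bin/env python3
# Minimal sketch (not from the article; the regex and the sample message
# are made up for illustration): a naive auto-linker of the kind many chat
# and mail clients use. Now that .zip is a real TLD, plain file names in
# prose match the pattern and become clickable links to whoever registers
# the corresponding domain.

import re

# Very rough "looks like a domain" pattern, limited to a few TLDs.
LINKIFY = re.compile(r"\b[a-z0-9-]+\.(?:com|org|net|zip)\b", re.IGNORECASE)

message = "I've attached backup.zip and notes.txt from example.com for you."

for match in LINKIFY.finditer(message):
    print(f"would be linkified: https://{match.group(0)}/")

# backup.zip now points at a registrable domain; notes.txt still does not.
```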
=> gemini.tuxmachines.org