Ughhh I really need to write a proper APFS driver for Linux that's not trash, it'd be such a good rootfs
=> More information about this toot | More toots from Lunaphied@tech.lgbt
@Lunaphied hm yeah given that:
yeah
=> More information about this toot | More toots from leftpaddotpy@hachyderm.io
@leftpaddotpy like, Apple handed us a good design for an SSD-optimized CoW file system with some useful and novel features that could be implemented, and we're just. not taking advantage of that in favor of other options.
Besides, ZFS is way overkill for a laptop or typical desktop. I don't even know where btrfs is at; its reputation seems permanently in the trash, and you'd think someone would've focused on fixing that. Meanwhile I know literally nothing about bcachefs in practice; it feels like it came out of nowhere, and tbh the name is bad because it sounds like it's entirely for caching and not suitable as a rootfs
=> More information about this toot | More toots from Lunaphied@tech.lgbt
@Lunaphied @leftpaddotpy nobody focused on fixing btrfs' reputation i guess, but the FS is very solid. the decade-old hearsay is mostly outdated.
it has been the default on SUSE for close to 14 years and the default on Fedora for 4 years, and Facebook uses it in their deployments. you would think that they would have covered the most common use cases.
it can very much be considered mature and has had a lot of work put into it by different devs employed by big companies
btrfs only starts to get stuck when you attempt to use some of its more advanced, niche features. btrfs is great at detecting errors of all kinds and failing fast. so when you do see a mount issue, you are more likely dealing with unrecoverable corruption caused by IO errors rather than a bug. best to then mount as ro and recover data.
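a rough sketch of what that could look like, assuming the volume still mounts at all (the paths are just placeholders):

```python
# rough sketch: remount a misbehaving btrfs volume read-only and copy the
# data off before attempting any repair. MOUNTPOINT and DESTINATION are
# hypothetical placeholder paths.
import subprocess

MOUNTPOINT = "/mnt/data"      # hypothetical btrfs mountpoint
DESTINATION = "/mnt/rescue"   # hypothetical place to copy the data to

# remount read-only so nothing can keep writing to the damaged filesystem
subprocess.run(["mount", "-o", "remount,ro", MOUNTPOINT], check=True)

# plain recursive copy of everything; rsync would work just as well
subprocess.run(["cp", "-a", MOUNTPOINT + "/.", DESTINATION], check=True)
```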
things that work well in btrfs:
things that at minimum have some annoyances and that you should avoid:
... am i forgetting sth?
Yeah the tooling is not on the same level as ZFS' for sure. Sometimes when you have an FS corrupted in a certain way, the tools just segfault.
I also feel like ZFS does odd things to circumvent OS functionality. Why do you need an LRU when you already have page cache, dnode cache etc?
Why do you need volume management when you have tools for that already built into Linux like md or dm (esp when used with LVM)?
=> More information about this toot | More toots from tammeow@cute-spellcasting-ideas.eu
@tammeow @Lunaphied i mean, i don't think that lvm is good lol. so i would rather have zfs's volume manager. but also a lot of this stuff was carried over from solaris, where it actually was the native thing.
quotas in btrfs being broken is rather alarming given how utterly fucked a btrfs system immediately becomes if the disk fills up. you cannot even delete things. i did not enjoy recovering that.
=> More information about this toot | More toots from leftpaddotpy@hachyderm.io
@leftpaddotpy @Lunaphied oh yeah. the kind of design deadlock where, in order to delete data, you need to allocate a new metadata object heh.
in a talk or somewhere they said that to work around this they just increased the forced spare storage to avoid those situations in most cases. i remember running into that in 2013 or something.
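presumably that forced spare storage is the btrfs global reserve; a rough sketch of peeking at it by parsing the output of btrfs filesystem usage (the mountpoint is a placeholder, and the output wording can vary between btrfs-progs versions):

```python
# rough sketch: print the "Global reserve" line reported by
# `btrfs filesystem usage`. MOUNTPOINT is a hypothetical placeholder and
# the exact output wording can vary between btrfs-progs versions.
import subprocess

MOUNTPOINT = "/mnt/data"  # hypothetical btrfs mountpoint

out = subprocess.run(
    ["btrfs", "filesystem", "usage", MOUNTPOINT],
    capture_output=True, text=True, check=True,
).stdout

for line in out.splitlines():
    if "Global reserve" in line:
        print(line.strip())
```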
=> More information about this toot | More toots from tammeow@cute-spellcasting-ideas.eu
@tammeow @leftpaddotpy @Lunaphied I ran into it this year!
=> More information about this toot | More toots from leah@blahaj.social
@leah @Lunaphied @leftpaddotpy out of curiosity, was / is the FS using space_cache=v1 or space_cache=v2? I am wondering if that influences the space reservation algo regarding this bug.
You can check this with mount.
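a rough sketch of that check, reading the same mount options that mount reports via /proc/self/mounts (the mountpoint is a placeholder):

```python
# rough sketch: check whether a mounted btrfs filesystem shows space_cache
# v1 or v2 in its mount options, as read from /proc/self/mounts.
# MOUNTPOINT is a hypothetical placeholder.
MOUNTPOINT = "/mnt/data"  # hypothetical btrfs mountpoint

with open("/proc/self/mounts") as mounts:
    for entry in mounts:
        device, mountpoint, fstype, options, *_ = entry.split()
        if mountpoint == MOUNTPOINT and fstype == "btrfs":
            opts = options.split(",")
            if "space_cache=v2" in opts:
                print("space_cache v2 (free space tree)")
            elif "space_cache" in opts or "space_cache=v1" in opts:
                print("space_cache v1")
            else:
                print("no space_cache option shown:", options)
```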
=> More information about this toot | More toots from tammeow@cute-spellcasting-ideas.eu
@tammeow @Lunaphied @leftpaddotpy iirc v1
=> More information about this toot | More toots from leah@blahaj.social