hmm, I see. and why do you want that? balancing storage usage between backup sites? is one of them too small for the whole pool?
for now I don't have a better idea, sorry. maybe this is the second-best time to think up a structure for the datasets and move everything into it.
but if the reason is the latter, that one backup site can't hold the whole pool, you may need to reorganize it again in the future. and that's not easy, because by then you'll have the same data (files of the same category) scattered around the FS tree even locally. maybe you could ease that with something like mergerfs, having it write each new file to the dataset with the lowest storage usage (see the sketch below).
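a minimal sketch of what I mean, assuming two backup datasets mounted at /mnt/backup-a and /mnt/backup-b (hypothetical paths). mergerfs's `lus` (least used space) create policy sends each new file to whichever branch currently holds the least data:

```
# /etc/fstab — pool the two datasets under one mount point
# category.create=lus: new files go to the branch with the least used space
# (category.create=mfs, most free space, is a common alternative)
/mnt/backup-a:/mnt/backup-b  /mnt/backup  fuse.mergerfs  category.create=lus,allow_other  0 0
```

if you'd rather keep files of the same directory together, the existing-path variants like `eplus` only spill over to another branch when the directory doesn't exist there yet.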
if you are ready to reorganize, think about what kinds (and subkinds) of files you will likely store in larger amounts, like media/video and media/image, and don't forget to take advantage of per-dataset storage settings: compression, recordsize, maybe caching. not everything needs its own custom recordsize, but for files that are read contiguously a higher value can be better, and the same goes for data that isn't accessed too often where you want a better compression ratio, since compression (and checksumming!) happens per record. video is sometimes compressible too, or rather some larger data blob inside the container is. something like the commands below.
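for example, a rough sketch with hypothetical names (pool `tank`, datasets under it); zstd needs OpenZFS 2.0+:

```
# large records for big, contiguously read video files; lz4 is nearly free
zfs create -o recordsize=1M -o compression=lz4 tank/media/video

# images are usually already compressed; defaults are fine, lz4 bails out fast
zfs create -o compression=lz4 tank/media/image

# rarely accessed archives: trade CPU for a better compression ratio
zfs create -o recordsize=1M -o compression=zstd-9 tank/documents/archive

# optional: only cache metadata for data you stream once and rarely reread
zfs set primarycache=metadata tank/media/video
```

these are per-dataset properties, so you can tune each category independently without touching the rest of the pool.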