Data HDD with SSD catch drive
https://lemm.ee/post/40718024
=> More informations about this toot | More toots from rambos@lemm.ee
qBittorrent has exactly the option you’re looking for. I believe it’s called “incomplete download path” in the settings; it lets you store incomplete downloads at a temporary path and moves them to their regular location when the download finishes. Aside from the download speed improvement, this also causes less fragmentation on your HDD (which might be part of the reason it’s so slow when downloading directly to it). Pre-allocating space can have a similar effect, but I’d recommend using only one of these two solutions at a time (pre-allocating space on your SSD would just waste space).
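For reference, the same two settings live in qBittorrent’s config file as well as the GUI. This is a sketch from a Linux install: the key names may differ between versions, and the paths are placeholders.

```ini
# ~/.config/qBittorrent/qBittorrent.conf (key names may vary by qBittorrent version)
[BitTorrent]
; incomplete downloads go to the SSD...
Session\TempPathEnabled=true
Session\TempPath=/mnt/ssd/incomplete
; ...and are moved here when finished
Session\DefaultSavePath=/mnt/hdd/complete
```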
=> More informations about this toot | More toots from Maxy@lemmy.blahaj.zone
But that would first download to SSD, then move to HDD and then become available (arr import) on jellyfin server, making it slower than not using SSD. Am I missing something?
=> More informations about this toot | More toots from rambos@lemm.ee
Is it possible to use SSD drive as a catch drive for 12 TB HDD so it uses SSD speeds when downloading and moves files to HDD later on?
Is that not what you asked for?
=> More informations about this toot | More toots from braindefragger@lemmy.world
Well yes, but I was hoping the files could be available (imported to the media server) before they are moved to the HDD. Import is not possible from the incomplete directory, if I understood that correctly (*arr stack)
=> More informations about this toot | More toots from rambos@lemm.ee
You would have to add both directories to your library.
=> More informations about this toot | More toots from catloaf@lemm.ee
It depends on what you’re optimising for. If you want a single (relatively small) download to be available on your HDD as fast as possible, then your current setup might be better (optimising for lower latency). However, if you want to be maxing out your internet speeds at all times and increase your HDD speeds by making the copy sequential (optimising for throughput), then the setup with the catch drive will be better. Keep in mind that an HDD’s sequential write performance is significantly higher than its random write performance, so copying a large file in one go will be faster than copying a whole bunch of random chunks in a random order (like torrents do). You can check the difference for yourself by doing a disk benchmark and comparing the sequential vs random writes of your drive.
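A rough way to see the sequential-vs-random gap with nothing but coreutils (the file name and sizes here are arbitrary; a proper benchmark tool like fio gives far more accurate numbers):

```shell
#!/bin/sh
# Sketch of a sequential vs random write comparison using only coreutils dd.
# conv=fdatasync / oflag=dsync force the data to disk, so the page cache
# doesn't inflate the numbers. Run it from a directory on the drive you
# want to measure.
F=./writebench.bin

# Sequential: one continuous 64 MiB stream
dd if=/dev/zero of="$F" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1

# Random-ish: 256 x 4 KiB blocks scattered through the same file (1 MiB total)
start=$(date +%s)
i=0
while [ "$i" -lt 256 ]; do
  off=$(( (i * 7919) % 16384 ))   # pseudo-random 4 KiB-block offset inside 64 MiB
  dd if=/dev/zero of="$F" bs=4k count=1 seek="$off" \
     conv=notrunc oflag=dsync 2>/dev/null
  i=$((i + 1))
done
echo "random: 1 MiB in $(( $(date +%s) - start ))s"
rm -f "$F"
```

On a spinning disk the second number is dramatically worse per megabyte, which is exactly what a torrent’s write pattern looks like.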
=> More informations about this toot | More toots from Maxy@lemmy.blahaj.zone
Thank you. The files I download are usually 5-30 GB in size. I don’t want to max out my internet speed, I just want to get the files into the media library ASAP after requesting a download manually (happens maybe a few times a week).
It makes sense. I’ll compare sequential and random write performance, and maybe even test the catch-drive setup since I have the hardware available.
At first I wasn’t aware that my speed is super low for an HDD, so I was looking for some magic solution with SSD speeds and HDD storage that might not even exist. I have to do more testing for sure
=> More informations about this toot | More toots from rambos@lemm.ee
You can, and qBittorrent has this functionality built in: set your in-progress download folder to the SSD, then set the move-when-completed location to your HDD.
As for the size, that would depend on how much you are downloading.
=> More informations about this toot | More toots from slazer2au@lemmy.world
But that would first download to SSD, then move to HDD and then become available (arr import) on jellyfin server, making it slower than not using SSD. Am I missing something?
=> More informations about this toot | More toots from rambos@lemm.ee
The biggest thing is that you have changed a random write into a linear write, something HDDs are significantly better at. The torrent is downloading little pieces from all over the place, requiring the HDD to move its head all over the place to write them. But when simply copying off the SSD, it keeps the head in roughly one place and just writes linearly, utilizing its maximum write speed.
I would say try it out, see if it helps.
=> More informations about this toot | More toots from BombOmOm@lemmy.world
I might try that, thx
=> More informations about this toot | More toots from rambos@lemm.ee
Depends on the file system. I know for a fact that ZFS supports SSD caches (in the form of L2ARC and SLOG), and I believe that LVM does something similar (although I’ve never used it).
As for the size, it really depends how big the downloads are. If you’re not downloading the biggest 4K movies in existence, then you should be fine with something reasonably small like a 250 or 500 GB SSD (although I’d always recommend going higher because of durability and speed)
=> More informations about this toot | More toots from fiddlesticks@lemmy.dbzer0.com
L2ARC is a read cache. The SLOG is only for synchronous writes.
=> More informations about this toot | More toots from lemmylommy@lemmy.world
Welp, guess I should do my research next time. Thanks for the heads up.
=> More informations about this toot | More toots from fiddlesticks@lemmy.dbzer0.com
Thx. I use ext4 right now. I might consider reformatting, but there are so many new words to research before deciding that. I’ve heard about ZFS, but I’m not sure it’s right for me since I only have 16 GB of RAM.
Downloads are 100-200 GB max, but less than 40 GB most of the time. I have a 512 GB SSD in use and a 2 TB SSD not in use; I can swap them if needed
=> More informations about this toot | More toots from rambos@lemm.ee
Yes. It’s part of the application and well documented. What did you try and not work?
=> More informations about this toot | More toots from braindefragger@lemmy.world
Are you also talking about the incomplete directory in qbit? It doesn’t make things faster afaik, but I might be wrong. I haven’t tried anything yet; I wanted to check whether this is a usual approach or not worth it at all. I’ve got zero experience with using an SSD as a catch drive, it just made sense to me
=> More informations about this toot | More toots from rambos@lemm.ee
Yes, if the temporary directory where the files are being downloaded (the incomplete folder) is on the SSD, then it should be faster, especially if you’ve correctly identified a cheap HDD as your bottleneck. Unless you are wrong about the HDD being the bottleneck.
=> More informations about this toot | More toots from braindefragger@lemmy.world
Yeah it will be faster, but it’s an extra step before the files become available on the HDD.
Even if my HDD were super fast and healthy, it would still be a bottleneck for 2 Gbps fiber? I’ll deffo play with the HDD more to find its max speeds; I wasn’t paying attention before because it felt normal to me
=> More informations about this toot | More toots from rambos@lemm.ee
I’m sorry, but I have no idea what you want. Best of luck.
=> More informations about this toot | More toots from braindefragger@lemmy.world
Yeah feels like that lol. Thx anyway, have a nice day dude
=> More informations about this toot | More toots from rambos@lemm.ee
this is just adding an extra step to the process before the file can be available to use. you’re just deferring the copy to the HDD until the very end of the torrent.
what OP wants is to download the file to an SSD, be able to use it on the SSD for a time, and then have the file moved to spinning disk later when they don’t need to wait for it.
=> More informations about this toot | More toots from acosmichippo@lemmy.world
Yeah, of course it is, because that’s what OP asked for. I don’t see “use it for a bit first and then automatically copy it over”.
I see:
=> More informations about this toot | More toots from braindefragger@lemmy.world
what is the point of a faster download if you just have to do another entire copy after that?
=> More informations about this toot | More toots from acosmichippo@lemmy.world
Ask OP
=> More informations about this toot | More toots from braindefragger@lemmy.world
or you could, you know, think about it for a second from their point of view. and they have already clarified this in other comments.
=> More informations about this toot | More toots from acosmichippo@lemmy.world
Man you’re slow.
=> More informations about this toot | More toots from braindefragger@lemmy.world
wtf does raid have to do with anything here? yeah, sure, I’m the slow one.
=> More informations about this toot | More toots from acosmichippo@lemmy.world
Wow. Good luck with life bro.
=> More informations about this toot | More toots from braindefragger@lemmy.world
the dude asks about an SSD cache for torrents and your multimillion-dollar answer is “raid”. lol
=> More informations about this toot | More toots from acosmichippo@lemmy.world
It seems that the commenter’s intention was clear to everyone except you. The commenter acknowledged the need for RAID software or a specific file system, mentioning that it had already been addressed. They understood the budget and that OP is a newb.
Although their tone may have been blunt, they stayed focused on their original point.
But you just kept nagging. lol
Either way OP was helped and now you can sleep knowing you did your part.
=> More informations about this toot | More toots from zelifcam@lemmy.world
Yeah, I use the incomplete folder location as a cache drive for my downloads as well. Works quite nicely. It also keeps the incomplete ISOs out of Jellyfin until they're actually ready to watch, so, bonus.
If it's not going faster for you, there's probably something else that's broken.
=> More informations about this toot | More toots from DaGeek247@fedia.io
It will download faster to the SSD, but then I have to wait for the files to be moved to the HDD before they get imported into the media server. I’m not after big numbers in qbit, I just want to start watching faster if possible. Sorry, I’m probably not explaining this well, and I’m not sure if I’m asking for something that even makes sense
=> More informations about this toot | More toots from rambos@lemm.ee
qbittorrent moves the completed files to the assigned location literally as soon as the download is done.
=> More informations about this toot | More toots from DaGeek247@fedia.io
I’m doing more research, but will defo test this
=> More informations about this toot | More toots from rambos@lemm.ee
but if the disk is actually bottlenecking at 40 MB/s, it will still take time to copy. That plus the initial download to the SSD will just end up taking more time than downloading to the spinning disk at 40 MB/s in the first place.
=> More informations about this toot | More toots from acosmichippo@lemmy.world
I doubt the disk will bottleneck at 40 MB/s when doing sequential writes. Torrent downloads are usually heavy random writes, which is the worst thing you can do to an HDD.
=> More informations about this toot | More toots from theterrasque@infosec.pub
40 MB/s is very, very low even for an HDD. I would eventually debug why it’s that low.
Yes, it’s possible. File systems like ZFS, btrfs, etc. support that.
=> More informations about this toot | More toots from ShortN0te@lemmy.ml
agreed, I think there is something else going on here. Test the write speed with another application; I doubt the drive actually maxes out at 40 MB/s unless it’s severely fragmented or failing.
=> More informations about this toot | More toots from acosmichippo@lemmy.world
It’s the cheapest drive I could find (a refurbished Seagate from Amazon), so I thought that was the reason for it being slow, but I wasn’t aware it was that low. I’m also getting 25-40 MB/s (200-320 Mbps) when copying files from this drive over the network. Streaming works great, so it’s not too slow at all. Is there a better way of debugging this? What speeds can I expect from a good drive, or the best drive?
I’ll research more about btrfs and ZFS, thx
=> More informations about this toot | More toots from rambos@lemm.ee
can you copy files to it from another local disk?
=> More informations about this toot | More toots from acosmichippo@lemmy.world
Yeah, but I need to figure out how to see the transfer speed using ssh. Sorry, noob here :)
=> More informations about this toot | More toots from rambos@lemm.ee
If you use scp (cp over ssh) you should see the transfer speed.
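rsync also prints live throughput with --progress, which works for a local disk-to-disk copy too. A minimal sketch (the sample file and target directory are placeholders; point the target at wherever the HDD is mounted):

```shell
#!/bin/sh
# Copy a sample file while watching throughput. rsync prints transfer speed
# with --progress; scp shows it by default. Replace /tmp/hdd-target with a
# directory on the drive you want to test (these paths are placeholders).
dd if=/dev/zero of=/tmp/sample.bin bs=1M count=32 2>/dev/null  # 32 MiB test file
mkdir -p /tmp/hdd-target
rsync -ah --progress /tmp/sample.bin /tmp/hdd-target/
rm -rf /tmp/sample.bin /tmp/hdd-target
```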
=> More informations about this toot | More toots from not_fond_of_reddit@lemm.ee
I managed to copy with rsync and got 180 MB/s. I guess my initial assumption was wrong; the HDD is obviously not the bottleneck here, it can get close to my ISP speed. Thank you for pointing this out, I’ll do more testing these days. I’m kinda shocked because I never knew an HDD could be that fast. Gonna reread all the comments as well
=> More informations about this toot | More toots from rambos@lemm.ee
The cool thing about rsync is that it goes ”BRRRRRRRRR!” like a Warthog… the plane… and it can saturate the receiving drive or array depending on your network and client. And getting 180 with rsync on a SATA drive… you can’t really hope for more.
And you can run a quick n dirty test using dd:
$> dd if=/dev/zero of=1g-testfile bs=1G count=1
=> More informations about this toot | More toots from not_fond_of_reddit@lemm.ee
Thx. I’ve seen dd commands in guides on how to test drive speed, but I’m not sure how I can specify which drive I want to test. I see I could change “if” and “of”, but I don’t trust myself enough to use my own modified commands before understanding them better. Will read more about that. Honestly I’m surprised drive speed testing is not easier, but it’s probably just me still being a noob xD
=> More informations about this toot | More toots from rambos@lemm.ee
Let’s say you want to test a drive that is mounted on /tmp… you just cd into that directory and you can use my example.
You can use
$> df -h
or
$> mount
to check how your drive is mounted in the OS
Most ”default” installations will have 1-4 partitions, with / being partition 3 or 4.
So if you look at the mount command and / is /dev/sdX3 (where X can be a-z depending on how many drives you have connected), and no other mounts are in the output, then every directory under / is on that drive… so you can run my example from your home directory if you fancy that.
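Putting that together, a quick sketch for double-checking which device a directory actually sits on before running the dd test (using / here just as an example path):

```shell
#!/bin/sh
# Show which filesystem/device backs a given directory, so the dd test
# file ends up on the drive you mean to measure. "/" is just an example;
# substitute the directory you plan to write the test file into.
df -h /                             # device, size, usage, and mount point
findmnt -n -o SOURCE --target /     # just the backing device (util-linux)
```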
=> More informations about this toot | More toots from not_fond_of_reddit@lemm.ee
Thank you a lot for being patient with me :D
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.62269 s, 232 MB/s
This HDD is obviously working fine and much faster than I thought it could. I guess I have to find the bottleneck elsewhere
=> More informations about this toot | More toots from rambos@lemm.ee
If I can at least help one stranger on the internet… well, then I have helped one stranger on the internet 😂
=> More informations about this toot | More toots from not_fond_of_reddit@lemm.ee
Hehe you are awesome 😂
=> More informations about this toot | More toots from rambos@lemm.ee
The limitation of HDDs was never sequential read/write when it comes to day-to-day use on a PC.
The huge difference from an SSD appears when data is written or read non-sequentially, often referred to as random I/O.
=> More informations about this toot | More toots from ShortN0te@lemmy.ml
Btrfs doesn’t support using a cache drive
=> More informations about this toot | More toots from possiblylinux127@lemmy.zip
It’s probably a 5400rpm drive, and/or SMR. Both are going to make it slower.
=> More informations about this toot | More toots from catloaf@lemm.ee
5400 rpm + SMR would explain it for writes, but not for reads.
=> More informations about this toot | More toots from Appoxo@lemmy.dbzer0.com
In my very limited experience with my 5400 rpm SMR WD disk, it’s perfectly capable of writing at over 100 MB/s until its cache runs out; then it pretty much dies until it has time to properly write the data, rinse and repeat.
40 MB/s sustained is weird (but maybe it’s just a different firmware? I think my disk was able to actually sustain 60 MB/s for a few hours when I limited the write speed; 40 could be a conservative setting that doesn’t even slowly fill the cache)
=> More informations about this toot | More toots from Markaos@lemmy.one
Unraid has this with their cache pools. ZFS can also be configured to have a cache drive for writes.
You can also DIY with something like mergerfs and separate file systems.
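As a sketch of the mergerfs route (the mount points and the policy choice below are illustrative assumptions, not a tested setup; check the mergerfs docs for the exact options your version supports):

```
# /etc/fstab sketch: pool an SSD and an HDD so new files land on the first
# branch (the SSD). A separate mover script or cron job would later migrate
# finished files to the HDD branch. Paths are placeholders.
/mnt/ssd:/mnt/hdd  /mnt/pool  fuse.mergerfs  category.create=ff,cache.files=off  0 0
```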
=> More informations about this toot | More toots from johntash@eviltoast.org
I’ve heard about all of these before; gonna do more research. Thank you
=> More informations about this toot | More toots from rambos@lemm.ee
Great that you have a catch drive. I assume the data drive manages everything. So I’m going to call that the manager drive.
Now you just need:
A 1st base drive.
A 2nd base drive.
A 3rd base drive.
A shortstop drive.
A left Field drive.
A center field drive.
A right field drive.
About 3-4 starting drives
A half dozen reliever drives.
A closer drive.
A hitting coach drive
And a couple of base running coach drives!
Got yourself a baseball team!
=> More informations about this toot | More toots from Lost_My_Mind@lemmy.world
Any HDD should be able to manage at least 100 MB/s sequential write speed. Unfortunately, torrent writes are usually very random, which just kills HDD performance. Multiple parallel downloads or concurrent playback from the same disk will only make it worse.
Using an SSD for temporary files will absolutely help. It should be big enough to hold all the files you are downloading at any one time.
You could also try to find a write cache setting that works for you. That way, what would usually be many small writes can be combined into bigger chunks in memory before being sent to storage. Depending on how much RAM is available, I would start at 1 GB or so, and if it is still bottlenecking, try increasing or decreasing until it improves. Of course, always stay within the range of free RAM.
Back when I was torrenting (ages ago) write cache helped a lot. It should be somewhere in the settings menu.
=> More informations about this toot | More toots from lemmylommy@lemmy.world
Oh, you are talking about torrent client settings? I could spare 1-2 GB of RAM, but not more than that (I’ve got 16 GB in total). I see this might help a lot, but would I still be limited by the HDD’s max write speed? Using an SSD for temporary files sounds great, but waiting for the files to be copied to the HDD would slow it down, if I understood correctly
=> More informations about this toot | More toots from rambos@lemm.ee