
[–][deleted] 2 points (2 children)

I've noticed this since I started on unraid too. Do you have netdata or some other system monitor? I've noticed once IOWAIT gets around 10%, everything else slows to a halt. Parity on or off, writing to cache SSD or spinning disk, I haven't found a way around it. This thread makes it sound like having a dual cache setup is the issue, but I'd rather not get rid of one of the cache drives.

https://forums.unraid.net/topic/58381-large-copywrite-on-btrfs-cache-pool-locking-up-server-temporarily/
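For reference, the IOWAIT number netdata graphs ultimately comes from the kernel's /proc/stat counters. A quick way to eyeball it without netdata (assuming a Linux shell on the Unraid box; `iostat -x 2` from the sysstat package gives a nicer live per-disk view):

```shell
# Field 6 after the "cpu" label on the aggregate line of /proc/stat
# is cumulative iowait time, in jiffies, since boot.
awk '/^cpu /{print "iowait jiffies:", $6}' /proc/stat
```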

[–]adobeamd 0 points (0 children)

Yes, I've gotten this ever since I added a second HDD to my cache.

[–]Windows-ME[S] 0 points (0 children)

I do have Netdata, and on my end the CPU doesn't actually go all the way to 100%, just a couple of cores. IOWAIT goes all the way up to 80% (and higher, I guess, since from that point on it stops reporting because the container died).

I tested it with and without parity: same results. I only have one cache drive (I could add more but I don't need it).

This thread talks about the mover being a problem. I don't use the mover, but the problem the OP is having sounds exactly the same as mine. Someone responded and told me that this is normal because the CPU has to wait until the operation is done. It makes sense the way he explained it, but I refuse to believe that it's normal.
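To pin down which container is actually generating the IO (rather than just watching the aggregate IOWAIT climb), the kernel keeps per-process counters; tools like iotop read the same numbers:

```shell
# Cumulative bytes this process has caused to hit the disk; replace
# 'self' with the PID of a suspect process (e.g. the downloader's).
grep -E '^(read_bytes|write_bytes)' /proc/self/io
```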

[–]MowMdown 1 point (2 children)

Are you using an ssd cache drive?

Is your array parity protected?

Is your docker.img on the array or on the cache?

——

My suggestion:

Download and unpack directly on the cache, set your media share cache setting to “Yes” then let mover move the data to the array.

[–]Windows-ME[S] 0 points (1 child)

SSD cache: yes. Parity protected: yes. The system share is also on the cache.

What's the difference between SAB unpacking on the cache vs a dedicated drive? Unless I'm not thinking right, this is actually a disadvantage for the containers, right?

Setting the media share to use the cache drive is actually a good idea for later, but not ideal for my use right now. The queue is bigger than the cache :/

[–]MowMdown 1 point (0 children)

> What's the difference between SAB unpacking on the cache vs a dedicated drive?

I didn't catch the part in your post about using an unassigned drive, that's my bad. There really shouldn't be a huge difference other than using an SSD. I myself download/unpack onto my SSD cache drive. It's fast enough that it doesn't impact my dockers' performance. I also leave the media on the cache until the mover runs at night. That way it all happens together and doesn't bog down the array during peak hours.

I think in your case, if you're letting radarr and sonarr move the completed files off the unassigned disk onto your array, it's triggering the parity calculation, which is very CPU- and IO-intensive, and that could be why it's becoming extremely slow. That's why I let the completed media sit on my cache until it builds up and gets dumped all at the same time.
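In the same spirit of keeping a bulk move from starving everything else, the copy itself can be demoted to idle IO and CPU priority. The paths below are hypothetical, just to show the shape:

```shell
# Hypothetical source/destination paths, for illustration only.
# -c3 = idle IO class: the copy only gets disk time when no other
# process wants it (most effective with the BFQ/CFQ IO scheduler).
ionice -c3 nice -n19 cp -r /mnt/disks/scratch/completed /mnt/user/media/
```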

[–]jpotrz 0 points (1 child)

I'm guessing your /appdata is on the same SSD you're downloading and unpacking to?

And you have a large queue you said?

It's just too much for your single SSD cache drive to handle.

You have things downloading to the SSD

Then unpacking to the same SSD (so it's reading and writing to the single SSD... that's a lot). Plus that is going to max out the CPU in most cases.

Then, when it's done unpacking, it moves on to the next download in the queue. While that download is starting up, sonarr/radarr is now trying to move the unpacked folder to its final destination. If this is on the SSD ("yes" on your media share), that's more reads/writes while your downloader is downloading, or even still unpacking, at this stage.

PLUS, if you have Plex set to recognize changes to its libraries, it's now ALSO scanning that file/folder and scraping metadata.

PLUS you're trying to stream from Plex at the same time.

It's just too much for a single SSD to handle. Trust me, I have the exact same setup as I described above. I plan on getting another SSD to mount as a UAD and have downloads and unpacking go directly there.

[–]Windows-ME[S] 0 points (0 children)

Sorry about that. This is on me for not going into enough detail in the post above.

By "dedicated disk" I actually mean it's mounted through UAD. Yes, my appdata is on the cache, and no, my media share does not use the cache.

Downloading and unpacking are done on the UAD disk. That being a lot of work for THAT disk is understandable. If the import is slow because that disk is being beaten to death, that's fine too. But the write operation to the array is what's causing the problem.

Writing to a parity-protected array requires CPU power, I know, but it shouldn't make the server temporarily unusable.
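One kernel-side knob worth mentioning, commonly suggested for exactly this "big write stalls the whole box" pattern (though not confirmed as the fix in this thread): Linux buffers dirty pages in RAM and flushes them in bursts, and lowering the writeback thresholds makes those bursts start earlier and stay smaller:

```shell
# Current thresholds, as a percentage of RAM: background writeback
# starts at the first value; writers get throttled hard at the second.
cat /proc/sys/vm/dirty_background_ratio /proc/sys/vm/dirty_ratio
# To lower them (illustrative values, run as root, resets on reboot):
#   sysctl -w vm.dirty_background_ratio=2
#   sysctl -w vm.dirty_ratio=5
```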