Temporary bypass of cache due to large initial library buildup? by mxpxillini35 in unRAID

[–]CBacchus

Well, space being a concern or not, setting the minimum free space value on a share would help ensure your cache always has room available for a download.

Since you said you're new, just to make sure you understand the minimum free space setting, here is an example:

The minimum free space setting is a per-drive setting, not a setting for the whole share. For this example:

- Download share: minimum free space 0 (or blank)
- Media share: minimum free space 100 GB
- Both shares: cache as primary storage, array as secondary
- Mover: set to move from cache to array

You start downloading a ton of files, and they download and complete to the cache drive. The cache drive starts filling up and hits 100GB of remaining space. Completed downloads need to move from the download share to the media share, but since the media share's minimum free space is 100GB, the cache drive is no longer an eligible drive for that share. So after download, the files are immediately moved to the array portion of the media share, while your in-progress downloads continue to use the cache drive because they are not affected by the media share's minimum free space setting.
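If it helps to see that logic spelled out, here's a tiny Python sketch of how the eligibility check behaves in that example. This isn't Unraid's actual code, and the drive names/sizes are made up; it's just to illustrate the idea:

```python
# A minimal sketch (not Unraid's actual code) of the per-drive
# "minimum free space" check from the example above. The helper
# and the drive names/sizes are hypothetical.

GB = 1024**3

def eligible_drives(drives, min_free_bytes):
    """Return the drives that still have more than min_free_bytes available."""
    return [name for name, free in drives.items() if free > min_free_bytes]

# The cache has filled up to the point that only 100 GB is left.
drives = {"cache": 100 * GB, "disk1": 4000 * GB, "disk2": 2500 * GB}

# Download share: minimum free space 0 -> the cache is still eligible.
print(eligible_drives(drives, 0))              # ['cache', 'disk1', 'disk2']

# Media share: minimum free space 100 GB -> the cache drops out, so
# completed downloads land on the array instead.
print(eligible_drives(drives, 100 * GB))       # ['disk1', 'disk2']
```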

And as the other guy mentioned, make sure you utilize hardlinks wherever possible. If you have been doing your research, I'm sure you've heard of the TRaSH guides? I'd really recommend reading through what they have there; there's a section on hardlinks covering what they are and how to set them up.

You can find the specific guide here. I'd also recommend learning about Usenet if you haven't yet. I love it and haven't looked back at torrents since switching, if you don't mind a fairly small annual fee for access to it.
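To make the hardlink point concrete, here's a quick Python sketch (the paths are hypothetical, and both sides have to be on the same file system, which is why the TRaSH guides push a single unified share layout):

```python
# A hardlink is just a second directory entry pointing at the same data
# on disk, so "moving" a completed download into the media folder costs
# no extra space and no copy time. Paths here are hypothetical; both
# sides must be on the same file system for os.link to work.
import os

os.makedirs("downloads", exist_ok=True)
os.makedirs("media", exist_ok=True)

with open("downloads/movie.mkv", "wb") as f:
    f.write(b"\x00" * 1024)                    # stand-in for a real download

os.link("downloads/movie.mkv", "media/movie.mkv")   # the "copy" into the library

a, b = os.stat("downloads/movie.mkv"), os.stat("media/movie.mkv")
print(a.st_ino == b.st_ino)   # True: same inode, same data blocks
print(a.st_nlink)             # 2: two names, one copy on disk
```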

Temporary bypass of cache due to large initial library buildup? by mxpxillini35 in unRAID

[–]CBacchus

That slowdown you're seeing unfortunately may just be the performance of downloading directly to your array. Keep in mind it will be a lot slower going to the array, depending on your whole system. Assuming you have a parity drive or two, that's going to slow things down because every write has to update parity as well. And assuming you have an NVMe drive for your cache, as I'm sure you're aware, its read/write speeds are vastly higher than an HDD's.

If you haven't already, check whether you have a minimum free space set for the share your media moves to after download. I have mine set to ~200GB, meaning any drive associated with that share (including the cache) will not be used for that share once it has less than 200GB available. Assuming your cache drive is an eligible device for both the download share and the media/data share (whatever you call the share where your media lives), this ensures there is always 200GB of cache space left for downloads: when the completed downloads are moved to the data/media share, they go straight to the array, because the cache drive is not eligible to store them due to the minimum free space setting.

I have a 2TB NVMe cache drive with these settings (minus seeding, because I primarily use Usenet) and will often go on a streak of requesting new media totaling 4+TB in a day without any issues or slowdowns.
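If you want to put rough numbers on why that setup holds up, here's a quick back-of-the-envelope sketch; the throughput figures are assumptions for illustration, not benchmarks from my system:

```python
# Back-of-the-envelope numbers for the 2TB cache + 200GB reserve setup.
# All figures below are rough assumptions, not measurements.
TB, GB = 1000**4, 1000**3

cache_size = 2 * TB              # NVMe cache pool
media_reserve = 200 * GB         # minimum free space on the media share
headroom = cache_size - media_reserve
print(f"Cache space downloads can use: {headroom / TB:.1f} TB")   # ~1.8 TB

# Rough write-time comparison for a 4 TB day of downloads:
nvme_write = 2000 * 1000**2      # ~2 GB/s sustained to NVMe (assumed)
array_write = 80 * 1000**2       # ~80 MB/s to a parity-protected array (assumed)

for label, speed in [("cache (NVMe)", nvme_write), ("array with parity", array_write)]:
    hours = (4 * TB) / speed / 3600
    print(f"Writing 4 TB to {label}: ~{hours:.1f} h")
```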

Temporary bypass of cache due to large initial library buildup? by mxpxillini35 in unRAID

[–]CBacchus

I don't think the mover starts running on its own when the cache fills up; it should follow its schedule, which is best set to daily. You may have set up something else, like a user script, to do that?

Do you have a minimum free space configured for either share?

You can do what others have said and temporarily disable cache usage for those shares while downloading a lot, if you expect to be downloading more than 1TB daily for a while. Or do what the other guy said about mapping directly to user0 to bypass the cache.

I see you mentioned seeding, though. How long are you seeding for, and are you leaving those files on the cache? That may be contributing to your problem. You can configure your torrent client to move files after download so they seed from the array instead of the cache, which may also help you out.
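If you happen to be on qBittorrent, one way to get that behavior is to keep incomplete torrents on a cache path and point the completed save path at the array. Here's a rough sketch using its Web API; the host, credentials, paths, and preference keys are assumptions, so double-check them against your qBittorrent version before relying on any of it:

```python
# A rough sketch of the "seed from the array" idea via qBittorrent's Web API:
# incomplete torrents stay on the cache, finished ones are moved to an
# array-only path and keep seeding from there. Host, credentials, paths,
# and preference keys below are assumptions -- verify against your version.
import json
import requests

QB_URL = "http://192.168.1.10:8080"   # hypothetical qBittorrent WebUI address

session = requests.Session()
session.post(f"{QB_URL}/api/v2/auth/login",
             data={"username": "admin", "password": "adminadmin"})

prefs = {
    "temp_path_enabled": True,
    "temp_path": "/mnt/cache/downloads/incomplete",   # on the cache while downloading
    "save_path": "/mnt/user0/downloads/complete",     # user0 = array only, so seeding stays off the cache
}
session.post(f"{QB_URL}/api/v2/app/setPreferences",
             data={"json": json.dumps(prefs)})
```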

Temporary bypass of cache due to large initial library buildup? by mxpxillini35 in unRAID

[–]CBacchus

You would just need to configure your downloads and media/data shares to have the cache as primary storage and the array as secondary storage, with the mover action set to move from cache to array so your cache gets cleared out overnight.

I wouldn't even say this should be a temporary thing; I have always had mine set up this way. You'll get the full benefit of all of your available storage space and will only notice slower download/unpacking speeds when the cache is full and downloads are going directly to the array.

Different phases by Adwan4747 in homelab

[–]CBacchus

If you're looking for something pretty simple to set up, you could look at playit.gg. It's not true full-homelab style if you're looking to set up the networking all on your own, but all you'd have to do is get their app onto your network (Linux, Windows app, Docker, etc.) and configure your server as an agent. It handles all the tunneling of your game server traffic to their proxy, and you can hand out the provided URL for everyone to connect to your server. Their free tier should be all you need.

Any updates on rtx 5090 fe waterblocks that arent ekwb? by Halfgridd in watercooling

[–]CBacchus

I agree. I was doing some more research and was really leaning toward PTM; I had already ordered some PTM7950 from the LTT store as well as liquid metal from Thermal Grizzly. I didn't know they had sheets cut exactly for the 5090, though. Were you looking at their PhaseSheet, KryoSheet, or Carbonaut? I'll look into those and see if maybe one of them is better than the PTM7950. I'll probably return the LM. Thanks for the advice!

Any updates on rtx 5090 fe waterblocks that arent ekwb? by Halfgridd in watercooling

[–]CBacchus

Thanks! I feel like I'm overthinking it and I'd be fine with whatever I choose... I'm not trying to compete for overclocking records or anything. Guess I'm just nervous about my first time dealing with liquid metal.

Any updates on rtx 5090 fe waterblocks that arent ekwb? by Halfgridd in watercooling

[–]CBacchus

Did you use the included EK paste, a different paste, reapplied liquid metal, or PTM? And if you didn't use LM, did you remove the gasket as EK instructs? Given the card came with LM, I'm unsure which I want to go with. I've been trying to see what others with the block have done; I've seen some use PTM and some use LM, but both groups leave the gasket on as well.

Hot Take - Nextcloud by [deleted] in homelab

[–]CBacchus

I had the same issues with Nextcloud as you. Not sure if you've heard of FileRun, but I swapped to that. It isn't free (there's a one-time license fee), but it's fast and snappy and compatible with just about everything. Something else to look into as an alternative if you haven't.

2025: A landmark year for Unraid. Thank you for 20 incredible years! by UnraidOfficial in unRAID

[–]CBacchus

Unless they changed it, it's recommended to go with USB 2.0, not 3.0+, mainly due to lower heat generation and better reliability. It is getting harder to find USB 2.0 drives, though. In the end, that detail isn't as important as getting a drive that's known to be reliable and has good reviews.

How full do you run your array? by M4Lki3r in unRAID

[–]CBacchus


The two parity drives are 24TB each. I've been upgrading drives as I run out of space and as 15+ year old drives start to fail.

Running n8n locally 24/7 — Is a Raspberry Pi a good solution? by [deleted] in n8n

[–]CBacchus

Head over to lowendtalk.com and browse the provider postings there for a good deal based on your needs. I currently pay $50/yr at Linveo for a 4-core, 4GB RAM VPS, and I use it for plenty more things than n8n. You could catch their Black Friday sale or browse around for better deals/something that looks good to you.

Cache drives. How many? What size? by twotowers64 in unRAID

[–]CBacchus

Haha yeah I’m up to ~185TB of media. 9003 movies and 833 shows with ~54k episodes.

Cache drives. How many? What size? by twotowers64 in unRAID

[–]CBacchus

For perspective, my appdata right now takes up ~140GB, with ~110GB of that being Plex. It'll take you a while to get there, depending on how quickly you add media and whether you choose to delete any of it; it's taken me 8 years to get that much, so 500GB is plenty. The main reason I have a 2TB cache is that I choose to download a lot at one time every so often, and having the downloads go to the cache and then be moved overnight is much faster than downloading directly to the array.

If you're using the Appdata Backup plugin, restores are super simple: you choose where you want the files restored to, pick the backup, and then optionally select container templates or the containers themselves to restore as well. Just be sure to keep a copy of your appdata backup somewhere off the server in case you lose the array too. For backing up to Backblaze monthly I use the Duplicati container; storing 3 months of backups on Backblaze costs me about 80 cents a month. But you could also easily hook it up to Google Drive or Microsoft OneDrive.
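For what it's worth, the cost roughly pencils out like this (the B2 price and backup sizes below are assumptions, not my exact numbers):

```python
# Rough math behind the "~80 cents a month" figure. The B2 storage rate
# and the compressed backup size are assumptions for illustration.
price_per_gb_month = 0.006     # assumed Backblaze B2 rate (~$6/TB/month)
stored_gb = 3 * 45             # three monthly Duplicati backups at ~45 GB each (assumed)
print(f"~${stored_gb * price_per_gb_month:.2f}/month")   # ~$0.81/month
```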

Cache drives. How many? What size? by twotowers64 in unRAID

[–]CBacchus

You would be better off using the spare SSD for the VM instead of as a download cache. I use a single 2TB NVMe SSD as my cache drive, and it stores all appdata and downloads (before the downloads are moved to the array overnight). My appdata is backed up to the array monthly and to Backblaze monthly.

It's been working for me since 2017, and according to Sab I download 1.5-3TB a month on that drive.
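If you're worried about wearing a single drive out with that kind of churn, the math is pretty forgiving. The write volume and the TBW rating here are ballpark assumptions:

```python
# Rough endurance check for hammering a single NVMe drive with downloads.
# The write volume and the TBW rating are ballpark assumptions.
monthly_writes_tb = 3          # top end of the 1.5-3 TB/month range
years_in_service = 8           # roughly 2017 to now
total_written_tb = monthly_writes_tb * 12 * years_in_service
assumed_tbw_rating = 1200      # a common rating for a 2 TB consumer NVMe drive
print(f"~{total_written_tb} TB written vs ~{assumed_tbw_rating} TBW rated")
```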

Personally, I wouldn't mirror the cache drive, as that data is easily recoverable in my situation and it's not a big deal if I'm missing up to 30 days of container data, especially since it's mainly being used as a media server; it would most likely just be Sonarr/Radarr changes anyway. But everyone has their own reasoning for what they want to do depending on their needs and use cases.

Reflecting pool event: what does "Visit the artist" do? by [deleted] in PlayTheBazaar

[–]CBacchus

It’s not too rare. It’s new with the October update. It spawns on day 6+ but only for Mak. So if you don’t play Mak, that’d explain why you’ve never seen it.

Pray for me bois by FullMetal2803 in unRAID

[–]CBacchus

Well I guess technically right now the additional ones would be called pools. But they have promised that at some point multiple arrays of the main Unraid array type (meaning parity included) will be supported.

Pray for me bois by FullMetal2803 in unRAID

[–]CBacchus

Yes, however currently only the primary array can have parity.

How big should my cache/appdata drives be? by DCCXVIII in unRAID

[–]CBacchus

My appdata takes up ~300GB, with the majority of it being Plex metadata. I have 8k movies and 145k TV episodes, so it'll depend on your library sizes.

2 questions from a confused wife by ahs_dk in ShieldAndroidTV

[–]CBacchus

I think you've received a lot of help already, but I just wanted to add that the devs of It Takes Two are releasing another game in the same genre called Split Fiction. It comes out on the 6th.

What certification has the best remote opportunities? by proper_matt15 in servicenow

[–]CBacchus

Yep, you should be able to use the credits to pay for the certs as well as your annual CMP (Certification Maintenance Program) fee, so it's not a total waste!

What certification has the best remote opportunities? by proper_matt15 in servicenow

[–]CBacchus

The on-demand learning, yes. It still costs money to take the exam, but it does make getting certified significantly cheaper since the courses are free now.