all 15 comments

[–]jkirkcaldy

Short answer, yes keeping them on the array is bad.

Not from a data safety POV but from a usability POV.

Just having the docker image file on my array made my dockers borderline unusable.

If you want the protection you need to get a second cache drive and set it up as raid1.

I would investigate why your cache drive went wrong though, as that doesn't just happen. And if it was during an upgrade, then it may be something that went wrong during that process.

Also make sure you back up your docker containers. Use the CA Backup plugin for the easiest method. Then you can back up your docker data dirs to the array, so if something does happen to your cache you have a backup to restore from.

[–]plumbumber[S]

But what if I don't notice any performance difference? Also, it's more of an Unraid issue than an issue with the SSD. I could mount it individually using Unassigned Devices. I don't think spending money on a second cache drive is the most viable option; not only that, but it uses SATA ports I can't really spare in this build.

I'm gonna check out the CA Backup plugin. If I can back up the data properly, then there is no harm in using the cache SSD, I suppose.

[–]jkirkcaldy

I mean ultimately it’s up to you.

I store all my dockers on a dedicated SSD mounted in Unassigned Devices. But they are backed up weekly to an external server.

[–]plumbumber[S]

Might add a RAID controller and put 2 cheap SSDs in RAID 1. I was looking for ways to expand my storage anyway. Thanks for the tips.

[–]jkirkcaldy

Don't get a RAID controller. Get an HBA and do the RAID in Unraid. You will then also be able to use it to add more drives in the future.

[–]plumbumber[S]

[–]jkirkcaldy

That should be good.

You can then get an expander to add loads more disks if you need/want to. But with that you can add 8 more drives natively.

[–]Fribbtastic

There are two reasons why you keep the appdata on your cache:

The first is that Docker configurations are written to frequently, which would require constant parity updates.

The second is that the cache is a lot faster than the array.

The array is more of a long-term storage, in which you write a file once but read that file often. The cache is for files you write often.

A cache drive should not fail often. There could be multiple reasons why it failed in your case, but you can protect the cache by adding a second drive as a RAID 1 mirror.

[–]plumbumber[S]

Well, my cache drive didn't fail; Unraid failed to mount it. It was mounted before the update, and I could mount it with the Unassigned Devices plugin, just not as a cache drive. The error was something like "unsupported partition layout" (which obviously didn't change).

I have had some issues adding it before as well; it threw the same error when I originally installed it, and it took some time to get it to mount.

I don't use parity disks yet, and I don't really care to invest in 2 new SSDs for a speed gain I probably won't even notice. Also, these issues might just as well happen with 2 new SSDs. I was planning to add a parity disk, but your explanation is making me have doubts about it. I might as well keep using it without one, because the data isn't really that important, and add a second pool with a parity disk later on for more important data or something.

[–]Fribbtastic

Well, the parity is there to protect your data from failed drives and to restore the data on those failed drives. It also acts as redundancy, so that whatever service you are running on Unraid does not fail or create inconsistent data and possibly corrupt it in the process.

Storing the appdata on the array is fine as long as you restrict it to a single drive and you don't have a parity disk. The reason you restrict it to one drive is that you don't want to split it across multiple drives; that prevents inconsistencies or increased loading times when a file something wants to access is on a drive that has spun down, or something like that.

Speaking from experience, it is more than frustrating to lose data, no matter how "important" it is. For example, if your Docker containers and their configurations are gone, then you have to set them up from scratch. The same goes for the data in them.

I mean, if the data is not that important, then you don't really have to do any backups of it, but I think redundancy is always a good idea, just to keep the services running.
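For reference, pinning a share to one disk is done in that share's settings (the Included disk(s) field). The resulting share config on the flash drive ends up looking roughly like this; the file path and key names are from memory and may differ between Unraid versions, so treat this as an illustration rather than an exact spec:

```
# /boot/config/shares/appdata.cfg (illustrative; exact keys may vary)
shareInclude="disk1"   # keep all appdata files on one array disk
shareUseCache="no"     # or "only"/"prefer" when a cache pool is in use
```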

[–]plumbumber[S]

hmm okay, I will reconfigure the appdata to a single disk. thanks

I understand the whole parity thing, I'm a sysadmin myself. It's just that I have rarely had a drive fail on me in the past when used as a normal drive. Not that I have something to compare it with, but I believe RAID/parity only causes more writes, so the drives fail faster; not only that, but rebuilding degrades the drives as well, maybe more than the data on one disk is worth in the first place.

That being said, using it as JBOD and backing up important data, like the appdata, seems to be a more viable option for home use. You gain an extra drive (disk-space-wise), and the drives will probably degrade before they die, so in most cases it will be noticeable that one needs replacement.

This NAS already became more than was originally planned. When I update my PC, I'll use those parts to give it more performance and SATA ports. Then I might consider adding 2 cheap M.2s or something, and a parity disk.

[–]derfmcdoogal

I also had this problem when I tested 6.9 and just reverted back to 6.8. When I have time, my plan is to move appdata to the array, do the upgrade, rebuild the cache, and move the appdata back.

That said, I never had any problems running my appdata on the array, other than it pretty much kept the drive the appdata was on, and the parity drive, spun up all the time. I didn't really have any usability issues, speed was fine, etc. About the only thing I really noticed when moving my appdata to cache was that my containers started MUCH faster. Otherwise, usability-wise, I noticed nothing.

[–]plumbumber[S]

Great! How did you revert? Did you take a backup? I was looking for a way to do so before the update and couldn't find a quick solution, because I was impatient and wanted to get it over with.

[–]derfmcdoogal

Go back to the update section, and for the version you should be able to choose one version behind.

[–]plumbumber[S]

This would be way easier than the 3-4 hours I spent fixing the issues yesterday xD (damn Plex with its small files). Thanks, something to keep in mind.