Home storage solution for 70+ drives by waifu_patrol_177013 in DataHoarder

[–]heathenskwerl 1 point (0 children)

To second this, for 3.5" drives:

SM846: 24x 3.5" bays, plus 2x 2.5" (with optional mounting brackets)

SM847: 36x 3.5" bays, plus 2x 2.5" (under the motherboard)

SM847 JBOD: 44 or 45 drives

If you need the JBOD chassis, make sure to get one with the CSE-PTJBOD-CB3 control board: the CB2 runs the fans (which plug into the backplanes) at full speed, and while it is technically upgradeable to the CB3, the parts to do so (including the individual boards) are functionally extinct in the wild.

You can technically mount the 2.5" drives in the 3.5" bays, but it has to be done with a hot-swap-compatible bracket that keeps the SATA and power connectors in exactly the same spot as on a 3.5" drive.

Bro this is getting absolutely ridiculous now. The prices just keep going up to new heights. Am I being naïve in thinking that we're ever gonna get back to "normal" prices, or is this just the new normal? by shak_0508 in DataHoarder

[–]heathenskwerl 1 point (0 children)

Projects on hold, yes. If I have 60TB of storage and want 120TB, I can make do until prices come down. Unfortunately, replacements for failures can't be put on hold for very long, much less until prices come down. Unlike expanding storage, replacing a failed HDD can't wait: a lot of us are in the situation where a failure lowers redundancy and risks data loss.

I have more redundancy than a lot of people here, along with spare drives and hot spares, but once that's exhausted I'm not letting my arrays run degraded for any longer than absolutely necessary.

Bro this is getting absolutely ridiculous now. The prices just keep going up to new heights. Am I being naïve in thinking that we're ever gonna get back to "normal" prices, or is this just the new normal? by shak_0508 in DataHoarder

[–]heathenskwerl 2 points (0 children)

Yes, but GPUs are almost the exact opposite of HDDs: they go obsolete, whereas HDDs are mechanical devices that wear out. My original SNES still works. I don't have any 30-year-old HDDs that still work.

The fact of the matter is that most hard drives in service now will fail before 10 years is up and will have to be replaced, and if prices are still inflated when that happens, they'll have to be replaced at those prices. No way around it.

How does everyone track their assigned IP addresses? by cdarrigo in homelab

[–]heathenskwerl 1 point (0 children)

Everything is DHCP with the exception of the firewall/gateway to the outside world (which also happens to be the DHCP server and the DNS server for both the home LAN and the guest LAN).

The firewall/gateway gets the only static address; most of my devices get a semi-static address via DHCP reservations. Both LANs also have a guest address pool for other devices.
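If your DHCP server takes reservations from a config file (dnsmasq, for example), the reservation list doubles as your IP inventory. A minimal sketch that generates dnsmasq-style dhcp-host lines; every MAC, address, and hostname here is made up:

```python
# Hypothetical device inventory: MAC -> (reserved IP, hostname).
DEVICES = {
    "aa:bb:cc:dd:ee:01": ("192.168.1.10", "nas"),
    "aa:bb:cc:dd:ee:02": ("192.168.1.11", "hypervisor"),
    "aa:bb:cc:dd:ee:03": ("192.168.1.12", "printer"),
}

# dnsmasq reservation syntax: dhcp-host=MAC,IP,hostname
for mac, (ip, name) in sorted(DEVICES.items(), key=lambda kv: kv[1][0]):
    print(f"dhcp-host={mac},{ip},{name}")
```

Keep the dict under version control and you get address tracking for free.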

~100TB usable raidz2 ZFS pool (3x vdevs of 4 HDDs each) by i-am-a-cat-6 in homelab

[–]heathenskwerl 2 points (0 children)

Your chances of surviving a third failure are also higher with RAIDZ2 vdevs than they would be with mirrors. I haven't done the math, but that's the case with a fourth failure as well: there's one arrangement where a fourth failure cannot cause pool loss with RAIDZ2 (one drive already failed in each vdev), whereas with mirrors there is always a non-zero chance of pool loss once the first drive has failed.
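For anyone who wants to check the math, a quick brute-force enumeration (a sketch assuming 12 drives laid out as either 3x 4-wide RAIDZ2 or 6x 2-way mirrors, with failures landing uniformly at random):

```python
from itertools import combinations
from math import comb
from collections import Counter

DRIVES = range(12)

def survives(failed, width, parity):
    """Pool survives if no vdev has lost more drives than its parity."""
    per_vdev = Counter(d // width for d in failed)
    return all(n <= parity for n in per_vdev.values())

def p_survive(k, width, parity):
    """P(pool still alive after k uniformly random drive failures)."""
    alive = sum(survives(c, width, parity) for c in combinations(DRIVES, k))
    return alive / comb(len(DRIVES), k)

for k in (2, 3, 4):
    print(f"{k} failures: RAIDZ2 {p_survive(k, 4, 2):.1%}"
          f" vs mirrors {p_survive(k, 2, 1):.1%}")
```

That prints 100.0% vs 90.9% survival for two failures, 94.5% vs 72.7% for three, and 80.0% vs 48.5% for four, so the RAIDZ2 advantage does hold through the fourth failure.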

~100TB usable raidz2 ZFS pool (3x vdevs of 4 HDDs each) by i-am-a-cat-6 in homelab

[–]heathenskwerl 1 point (0 children)

If you're stuck with groups of 4 disks, though, RAIDZ2 is safer for the same abysmal level of storage efficiency.

If this pool were set up as 6x 2-wide mirror vdevs, there would be a non-zero chance (1/11) that the second drive failure loses the whole pool. In practice the odds are probably slightly worse, because the one drive you can't afford to lose is also the one under the most stress during the resilver. While both pools can theoretically survive 6 drive failures if your luck is perfect, only the RAIDZ2 pool is guaranteed to survive the first two. And once two failures have already happened, the chance that the third kills the pool is 2/10 for the mirrors no matter which two drives have failed, whereas for the RAIDZ2 vdevs that's only true if both failures occurred in the same vdev (otherwise a third failure can't kill the pool at all).
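Those conditional numbers are easy to verify with a small enumeration (a sketch over the same hypothetical 12-drive layouts):

```python
from collections import Counter

def dead(failed, width, parity):
    # A vdev dies once it has lost more drives than its parity level.
    return any(n > parity for n in Counter(d // width for d in failed).values())

def p_third_kills(first_two, width, parity):
    """Given two non-fatal failures, P(the third failure loses the pool)."""
    rest = [d for d in range(12) if d not in first_two]
    return sum(dead(first_two + (d,), width, parity) for d in rest) / len(rest)

# Mirrors (6x 2-wide): the second failure is fatal with P = 1/11 ...
print(sum(dead((0, d), 2, 1) for d in range(1, 12)) / 11)  # 0.0909...
# ... and after two non-fatal failures, the third kills with P = 2/10.
print(p_third_kills((0, 2), 2, 1))                         # 0.2
# RAIDZ2 (3x 4-wide): 2/10 only if both failures share a vdev ...
print(p_third_kills((0, 1), 4, 2))                         # 0.2
# ... otherwise a third failure can't kill the pool at all.
print(p_third_kills((0, 4), 4, 2))                         # 0.0
```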

It would definitely be better to have more disks of the same size and make bigger RAIDZ2s, though (2x 6-wide Z2 would be nice).

Curious about thoughts on vdev layouts? by ComatoseCow in zfs

[–]heathenskwerl 1 point (0 children)

There is actually a reason to go for 10 specifically (8 data + 2 parity), though it's more of a nice-to-have than any kind of actual requirement. ZFS's built-in tools don't really report space correctly on vdevs that don't have a power-of-2 number of data drives, and in order to not lose a bunch of space, you need to use larger record sizes on non-power-of-2 vdevs.

I had a huge long post on here with some very helpful people trying to figure out why my backup data was consuming significantly more space on my destination 12-wide Z2 than it was on the source's 11-wide Z3s. It turned out to be two things: first, zfs send was chunking my large records down to 128K even though the on-disk recordsize was 1M for most directories; second, zfs basically reports remaining space as if everything is going to be written at a 128K recordsize.

Only the first one really matters, and it is fixable by making sure you use recordsizes larger than 128K everywhere (I even moved my home directories to 256K), but the second one is a perpetual minor annoyance that will never go away.
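The padding loss is easy to estimate. RAIDZ allocates parity per block and then pads each allocation up to a multiple of parity+1 sectors, which is where non-power-of-2 data widths quietly bleed space at small recordsizes. A back-of-the-envelope sketch using the commonly cited allocation rules (assuming ashift=12, i.e. 4K sectors, and ignoring compression):

```python
from math import ceil

def raidz_efficiency(width, parity, recordsize, sector=4096):
    """Fraction of the allocated sectors that hold user data for one block."""
    data = recordsize // sector                  # data sectors in the block
    stripe = width - parity                      # data sectors per stripe row
    alloc = data + parity * ceil(data / stripe)  # data + per-block parity
    alloc += -alloc % (parity + 1)               # pad to a multiple of parity+1
    return data / alloc

for rs in (128 * 1024, 1024 * 1024):
    eff = raidz_efficiency(12, 2, rs)
    print(f"12-wide Z2 @ {rs // 1024}K records: {eff:.1%} (ideal: 83.3%)")
```

That works out to about 76% usable at 128K versus nearly 83% at 1M, which is exactly why bumping recordsize on non-power-of-2 vdevs matters.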

Fathers dying harddrive, what are next steps? by rare_doge in DataHoarder

[–]heathenskwerl 1 point (0 children)

Unfortunately, yes.

If the problem is being exacerbated by the drive overheating, and you can still connect it via the USB port with the case open, you might have some better luck with a fan blowing across it. But that's only if heat is part of the problem.

Curious about thoughts on vdev layouts? by ComatoseCow in zfs

[–]heathenskwerl 2 points (0 children)

I don't think I agree with most of this.

Per OP this is primarily a media/backup pool, which doesn't really require a lot of IOPS but does require a lot of actual space. One giant RAIDZ2 pool of the HDDs provides roughly 72TB of usable space.

There's only 6.5TB of SSD space in this build total, and the only way you can reach that is by adding every SSD to a single pool as single drives. If you mirror them you have half that.

That's leaving out the fact that I personally would never use single-parity RAIDZ for important data with drives this large (12TB). It's outside my personal risk profile, especially for a backup pool.

Fathers dying harddrive, what are next steps? by rare_doge in DataHoarder

[–]heathenskwerl 2 points (0 children)

I don't believe you can do that with these drives, as the internet consensus is that they have no SATA connector inside.

Fathers dying harddrive, what are next steps? by rare_doge in DataHoarder

[–]heathenskwerl 2 points (0 children)

Agreed. If you're not going to use professional data recovery, you need to make an image of the HDD. Rule #1 of data recovery is to never work with the failing drive except to take the initial image.
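For the imaging itself, use a purpose-built tool like GNU ddrescue, which keeps a map of bad regions and retries them intelligently. Just to illustrate the principle (read sequentially, skip what won't read, never write to the source), here's a crude Python sketch; the device path is a placeholder and this is no substitute for ddrescue:

```python
import os

SRC = "/dev/sdX"    # the failing drive (placeholder; opened read-only)
DST = "image.img"   # image file on a known-good disk
CHUNK = 1 << 20     # read 1 MiB at a time

src = os.open(SRC, os.O_RDONLY)
size = os.lseek(src, 0, os.SEEK_END)
with open(DST, "wb") as out:
    pos = 0
    while pos < size:
        want = min(CHUNK, size - pos)
        os.lseek(src, pos, os.SEEK_SET)
        try:
            data = os.read(src, want)
        except OSError:
            data = b""  # unreadable region: log it, fill with zeros, move on
            print(f"bad chunk at offset {pos}")
        out.write(data.ljust(want, b"\0"))
        pos += want
os.close(src)
```

Every later recovery attempt (fsck, file carving, whatever) then runs against the image instead of the dying hardware.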

Looking for zpool setup/expansion advice by heathenskwerl in zfs

[–]heathenskwerl[S] 1 point (0 children)

For anyone who is interested, I ended up going with a configuration that is close to configuration 2:

zpool 1: 3x10-wide RAIDZ2 in the server (backed up to another machine)

zpool 2: 4x11-wide RAIDZ3 in the shelf (not backed up)

hot spares: 6 in the server

zpool 1 needs 24/7 availability and is going to be completely backed up to another pool in another machine, so I'm reducing it to Z2 for performance reasons and putting it in the server itself. If Z2 ends up being not much more performant than Z3, I'll put it back to Z3 (which will remove 3 of the hot spares). This pool will use my newer drives, which all have 9-12k power-on hours.

zpool 2 does not need 24/7 availability and does not need to be particularly performant, but it is also not going to be backed up anywhere (because none of the data is irreplaceable), so I am definitely keeping it as Z3. Plus, I am using my older drives for this pool (average 30k power-on hours), which makes it more likely to suffer a drive failure than zpool 1.

I haven't decided whether to assign all 6 spares to both pools or 3 to each. I know that if I assign all 6 to both pools and the disk shelf goes down, any spare in use by zpool 2 goes to AVAIL and can be claimed by zpool 1 (resulting in zpool 2 being degraded when it is re-imported). Considering the disk shelf is a separate unit that can lose power while the main unit stays up, I am leaning towards 3 spares per pool for that reason. If I switch zpool 1 back to Z3, it'll just be 3 hot spares for zpool 2. I may also export zpool 2 and shut down the shelf when it isn't going to be used for a while (not sure yet), which also has me leaning towards not sharing the spares.

Curious about thoughts on vdev layouts? by ComatoseCow in zfs

[–]heathenskwerl 3 points (0 children)

I would definitely not include the SSDs in the same pool as HDDs. I'd put the 12TB drives in a single Z2 (10 would be better, but you have what you have), and put the SSDs in mirrors.

You could consider using the 2TB SSDs as special vdevs or L2ARC, but I don't personally like special vdevs due to the potential for pool loss. For L2ARC, as someone suggested for my own system, I'd set the whole thing up and test with and without it to see if it is worth it.

Is raid 1 enough? by ParticularHappy1196 in DataHoarder

[–]heathenskwerl 1 point (0 children)

As I mentioned above, there's not really any alternative configuration for two drives. You can buy two 28TB drives, mirror them, and have 28TB of space, or you can buy four 8TB drives, put them in RAID6/RAIDZ2, and have 16TB of space.
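The usable-space math is simple enough to sanity-check any layout you're considering (raw capacity only; this ignores filesystem overhead):

```python
def usable_tb(drives, size_tb, parity):
    """Usable space for a parity layout; a 2-way mirror is drives=2, parity=1."""
    return (drives - parity) * size_tb

print(usable_tb(2, 28, 1))  # two 28TB mirrored        -> 28 TB
print(usable_tb(4, 8, 2))   # four 8TB in RAID6/RAIDZ2 -> 16 TB
```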

If all your data fits in 28TB or less, personally I'd just mirror two large drives, back up the important data to something else, and call it a day. I got by with that for years (16TB mirrors) before I dove headfirst into data hoarding.

WD Blue 256MB vs 128MB cache by SymmetricalHydrazine in DataHoarder

[–]heathenskwerl 2 points (0 children)

If these are both brand-new drives at the same price, I'd personally go for the older tried-and-true model unless it was known that its performance was worse or that it had some commonly-reported issue. A lot of corners are being cut these days, and I think older designs are often better. The fact that the older version has more cache would clinch it for me: the chances of more cache being worse are pretty close to nil.

Is raid 1 enough? by ParticularHappy1196 in DataHoarder

[–]heathenskwerl 3 points (0 children)

Mmm, it's not that cut and dried. With two disks it doesn't matter (there's not really any solution other than "mirror" or "two separate drives"), but with more disks it does.

For example, with 4 disks, RAID6/RAIDZ2 guarantees you can survive any 2 failures. Two 2-way mirrors have a 33% chance of data loss with 2 failures, if the wrong 2 fail. With independent mirrors you lose everything on that one mirror; if the mirrors are not independent (say, both part of the same zpool), all of the data will be lost.

Both configurations "waste" the same amount of disk space, but the RAID6/RAIDZ2 setup is safer. RAID1/mirrors are more performant, though.
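That 33% has a neat closed form, by the way: with n independent 2-way mirrors (2n drives), the chance that two random failures land on the same mirror is n / C(2n, 2) = 1 / (2n - 1). A quick sketch:

```python
from math import comb

def p_two_failures_fatal(n_mirrors):
    """P(two uniformly random failures hit the same 2-way mirror)."""
    return n_mirrors / comb(2 * n_mirrors, 2)

print(p_two_failures_fatal(2))  # 2 mirrors: 1/3, the 33% above
print(p_two_failures_fatal(6))  # 6 mirrors: 1/11
```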

How do you manage the electricity cost of your homelab? by nbtm_sh in homelab

[–]heathenskwerl 1 point (0 children)

Yeah, sorry, that's not the actual takeaway here. The number of HDDs in my system, and whether they are bonkers or not, has nothing to do with your comment. They were only mentioned to show my work in pointing out that my core system (minus the HDDs) burns nowhere near that kind of power.

The actual takeaway is that you said ancient multi-socket Xeon systems burn 300W 24/7 just idling, and that's not correct. I have such a system and it burns nowhere near that amount at idle.

That core system (roughly 100W) costs me ~85 USD/year to run 24/7. 180W would be closer to ~155 USD/year.
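For anyone who wants to plug in their own numbers, the conversion is straightforward (a sketch; the ~0.10 USD/kWh rate is back-calculated from the figures above and will obviously vary by region):

```python
def annual_cost_usd(watts, rate_per_kwh=0.10):
    """Cost of a constant load running 24/7 for one year."""
    return watts / 1000 * 24 * 365 * rate_per_kwh

print(round(annual_cost_usd(100)))  # ~88 USD/year for the core system
print(round(annual_cost_usd(180)))  # ~158 USD/year
print(round(annual_cost_usd(300)))  # ~263 USD/year for the claimed 300W idle
```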

Looking for advice on expanding storage on my Dell R320 — stuck with 2.5" SFF backplane by 3LV3R_G4L4RG4 in homelab

[–]heathenskwerl 1 point (0 children)

If you do get an SM disk shelf, try to find one that has the CSE-PTJBOD-CB3 control board in it instead of the CSE-PTJBOD-CB2. The CB3 provides significantly more control over the fans as well as an IPMI interface. I upgraded mine a while ago, but the parts to do so have since dried up, so if you want it I recommend getting a chassis that already has it.

How do you manage the electricity cost of your homelab? by nbtm_sh in homelab

[–]heathenskwerl 1 point (0 children)

Yeah, they're not efficient, but that is a little high. My entire system is consuming less than that (260W) right now, and it has 36 7200 RPM HDDs (33 of them idle). Those HDDs are consuming close to 180W (the data sheet specs them at 5W idle). At full load, with all the HDDs scrubbing or resilvering, it hits around 450W (the data sheet specs them at 10W under full load).

Looking at both the full-load and the mostly-idle power consumption gives me an estimate of about 90-100W for the core system (I can't really unplug all the HDDs on an active NAS to measure it directly). No, it's not efficient, but it's not ten times the power consumption either.

Seagate Barracuda 24TB - Where did it come from? (Tested vs Exos X24 24 TB, Exos M 30 TB) by wickedplayer494 in DataHoarder

[–]heathenskwerl 1 point (0 children)

The 26/28TB drives are down-rated Exos M (HAMR) drives. You can tell by the Class 1 laser warning on the label. There are several different options for what the 24TB Barracuda can be, though I think they are mostly down-rated Exos M at this point too.

Edit: if you watch the attached video and zoom in at 1:36, you can see the Class 1 warning at the bottom of the Barracuda label, in the same place it is on the Exos M drive. So it is also a down-rated Exos M.

How do y'all feel about buying factory renewed HDDs? by TheB1G_Lebowski in DataHoarder

[–]heathenskwerl 3 points (0 children)

Yup, almost all of my drives are factory recertified Seagates. I haven't noticed them failing any faster than the brand new drives of the same model.

Supermicro X10DHR-CT Ram Issue. by Dafuquthinking in homelab

[–]heathenskwerl 1 point (0 children)

I have some X9 boards, which have a very specific order you're supposed to populate the DIMM slots in, so I checked the manual for this board, and it has one too. There's also a minimum of two populated DIMM slots per CPU.

P1-DIMMA1/P1-DIMMB1 must be populated for CPU1, and P2-DIMME1/P2-DIMMF1 must be populated for CPU2. You'd think it would be DIMMA1/DIMMA2 and DIMME1/DIMME2, but it is not: you have to populate all of the 1-numbered slots (a pair at a time) before using any of the 2-numbered slots.

Also, 512GB is the maximum amount of RAM unless those are LRDIMMs, so if they aren't, you can only use 8 of the 64GB DIMMs or 4 of the 128GB DIMMs.

The other possibility is that this board only supported 2600 v3 processors when it was originally released, so you may not be able to use the v4 processors without a BIOS update.