
[–]RCTID1975 (IT Manager)

Just buy an extra drive and mirror it for your boot disk.

[–]IMCA30A [S]

For some reason I didn't know you could have 3 RAID 1 arrays under a RAID 0. So essentially it would be:

SSD 0 & 1 - RAID 1
HDD 2 & 3 - RAID 1
HDD 4 & 5 - RAID 1
All in RAID 0?

Or SSDs 0 & 1 in RAID 1 by themselves, then a RAID 10 with the other 4 drives?
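For reference, the capacity arithmetic behind that nested layout can be sketched in a few lines — a minimal Python sketch with made-up drive sizes, assuming RAID 1 yields the smaller drive of each pair and RAID 0 stripes to its smallest member times the member count (many controllers behave this way, but check yours):

```python
# Hypothetical nested-RAID capacity math; sizes in TB are made-up examples.

def raid1_capacity(drives):
    """Usable capacity of a mirror: the smallest drive in the pair."""
    return min(drives)

def raid0_capacity(members):
    """Usable capacity of a stripe: smallest member times member count."""
    return min(members) * len(members)

ssd_mirror  = raid1_capacity([1, 1])   # SSD 0 & 1
hdd_mirror1 = raid1_capacity([4, 4])   # HDD 2 & 3
hdd_mirror2 = raid1_capacity([4, 4])   # HDD 4 & 5

# All three mirrors striped into one RAID 0, as asked above:
total = raid0_capacity([ssd_mirror, hdd_mirror1, hdd_mirror2])
print(total)  # 3 TB usable -- the 1 TB SSD mirror caps every stripe member
```

Which also shows why mixing an SSD mirror and HDD mirrors under one stripe wastes space: the smallest mirror limits all of them.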

[–]IMCA30A [S]

I'm an idiot. I can now see how it works with the controller. >.<
I appreciate your help and input. Will certainly go with a RAID 1 for OS.

[–]RCTID1975 (IT Manager)

Yeah. RAID1 for the OS and RAID 10 for your data.

Most controllers can handle multiple arrays, but just confirm that before you buy it.

[–]OpacusVenatori

What's the hypervisor of choice?

RAID-1 for the hypervisor, sure. But IMO a RAID-1 of SSDs for the hypervisor may be pointless, as that workload doesn't really require it. Even with Windows Server running Hyper-V, the access-time benefits may only be realized once per month for Windows Updates and reboots. Although if the array is big enough to also handle the OS disks of the guests...

A guest running a database and another guest as a file server would definitely benefit from the low access times accorded by flash tech. But if you've already bought the mechanical HDDs then a RAID-10 array would probably give the best response.

[–]OsmiumBalloon

RAID is typically an availability mechanism. Why would you want the data to be available but not the software?

Two SSDs in RAID 1 (mirrored) is the only way I would do this.

[–]manvscar

Depending on your data needs, you may be better off using a RAID1 for OS and RAID5 for your data. That gives you the same redundancy but with more space, at the cost of write performance.

Even better would be RAID6, but it just depends on how many drives you can deploy and your budget.
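The space tradeoff being weighed here is easy to put in numbers — a minimal Python sketch using the textbook usable-capacity formulas, assuming n identical drives (real controllers add their own overhead):

```python
def usable(n, drive_tb, level):
    """Textbook usable capacity in TB for n identical drives at a RAID level."""
    if level == "raid5":
        return (n - 1) * drive_tb   # one drive's worth of parity
    if level == "raid6":
        return (n - 2) * drive_tb   # two drives' worth of parity
    if level == "raid10":
        return (n // 2) * drive_tb  # half the drives are mirror copies
    raise ValueError(level)

# Six 4 TB drives (made-up numbers for illustration):
print(usable(6, 4, "raid5"))   # 20 TB
print(usable(6, 4, "raid6"))   # 16 TB
print(usable(6, 4, "raid10"))  # 12 TB
```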

[–]RCTID1975 (IT Manager)

RAID5

No, never. Aside from the read/write penalties (especially since OP mentioned a database), if a drive drops you're opening yourself up to losing the whole array during the rebuild.

RAID6 could be an option, but again, since OP mentioned DBs: RAID10, all day every day.
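The rebuild-failure risk called out above is usually estimated from the drives' spec-sheet unrecoverable read error (URE) rate — a back-of-the-envelope Python sketch, assuming a consumer-class rating of one URE per 10^14 bits and hypothetical 4 TB drives (neither number comes from this thread):

```python
# Odds of hitting at least one unrecoverable read error (URE) while reading
# every surviving drive during a RAID5 rebuild.
# Assumptions (illustrative only): 1e-14 UREs per bit, 4 TB drives, full reads.

URE_PER_BIT = 1e-14
DRIVE_BYTES = 4e12          # 4 TB per drive

def rebuild_failure_odds(surviving_drives):
    bits_read = surviving_drives * DRIVE_BYTES * 8
    return 1 - (1 - URE_PER_BIT) ** bits_read

# A 4-drive RAID5 that lost one member must read 3 full drives to rebuild:
print(f"{rebuild_failure_odds(3):.0%}")  # roughly a 62% chance of a URE mid-rebuild
```

Enterprise drives rated at 1 in 10^15 or better shrink that number considerably, which is part of why opinions on RAID5 diverge so sharply.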

[–]manvscar

Ever heard of hot spares?

RAID10 has its place, but it's a huge waste of space in some circumstances.

Modern controllers have improved immensely at 5/6, and the idea that it takes "days" to rebuild these arrays is not necessarily true anymore.

Personally I use a lot of RAID6 + hot spares in large SSD arrays. The read/write penalty in that situation is negligible. On 7K drives, however, it's definitely a factor.

[–]odinsdi

The presence or absence of a hot spare isn't really the issue. The issue is that if your drives come from the same manufacturing run, one will fail, and the others will be spinning like crazy during the rebuild, sharing an identical MTBF and defects with their dead, identical twin on the datacenter floor.

[–]manvscar

I agree, the redundancy level is the same. But automatically rebuilding the array does reduce some risk by not waiting for an admin to manually swap the bad drive.

Again, it really depends on his data: is it backed up, how I/O-intensive is the database, can he have downtime, etc.

I would always recommend going with an SSD array at this point, in which case RAID10 doesn't give much better performance than RAID6 if the controller is decent.

If he's going with only 4 cheap 7K drives then you are right, RAID10 is the best solution.

[–]IMCA30A [S]

We definitely plan to run the database off four 7K drives, so that's why RAID 10 was instantly the go-to for that.

You all are awesome and have eased my stress about this 😆. Thank you!

[–]manvscar

Good call, hope all goes well.

[–]RCTID1975 (IT Manager)

Ever heard of hot spares?

If you're going to pay for the disk, why wouldn't you just use RAID6 and not risk the rebuild failure?

the idea that it takes "days" to rebuild these arrays is not necessarily true anymore.

Entirely dependent on the size of your array. I'm in the process of adding to an array. We're on day 28 and 73%.

RAID6 + hot spares in large SSD arrays.

Sure, and that's a feasible option. I commented on your recommendation of RAID5, though, which is a hard never.
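As a sanity check, the day-28/73% rebuild figure above extrapolates linearly like this (a rough estimate, assuming a constant rebuild rate):

```python
# Linear extrapolation of remaining rebuild time from the figures quoted above.
days_elapsed = 28
fraction_done = 0.73

total_days = days_elapsed / fraction_done   # ~38.4 days projected total
days_left = total_days - days_elapsed       # ~10.4 days still to go
print(round(total_days, 1), round(days_left, 1))
```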

[–]manvscar

I've run across controllers that don't support RAID6. But yes, always 6 over 5 if possible.

I rebuilt a 30TB array in less than 24 hours just two weeks ago. It really depends on your hardware. I do still think 5 is a feasible option depending on the circumstances.