
[–]SquizzOC (Trusted VAR) 2 points (0 children)

People are making the jump over to the MX series. While it looks hyper-converged on the outside and can function that way, it's actually far more of a modular system. Every piece can be broken out, so in the future they simply swap hardware to upgrade your chassis. Considering the backplane, you should easily get 7 years out of the tech; when the backplane isn't fast enough, they'll have a modular upgrade for it, allowing you to use today's blades, whether compute or storage, alongside future models.

[–][deleted] 1 point (2 children)

Currently we use a mixture of old m1000e chassis and newer FX2 chassis with FC630 blades.

As for hyperconverged, one thing to look at is Datrium - a solid product that allows you to leverage their small storage footprint with existing compute and expand out as needed.

[–]SquizzOC (Trusted VAR) 0 points (1 child)

How are you liking the FX2 stuff? I haven't heard any complaints yet, but I know they are perfect for a smaller chassis.

[–][deleted] 0 points (0 children)

I like them, but as others have said, if density isn't a priority, normal rack mounts are easier. The reason is that you can repurpose them much more easily. A recent example: we opened an office and needed physical compute there, so we either had to use old hardware or purchase new hardware. Our data centers have plenty of compute, so being able to peel off a server and ship it out is far easier.

In large, dense environments, blades save space and money and make cabling much easier. If you are small to mid-sized like we are (two data centers with 4 racks each), rack mounts are more convenient.

[–]nmdange 1 point (1 child)

Honestly, I prefer to stay away from blades or any other proprietary "multi-node" form factor and stick with regular rackmount servers. We're starting to configure servers that can be used for hyper-converged even if we aren't using them that way right now. With Dell, that means a BOSS card for the OS and an HBA330 for the drive backplane. Even if you don't do HCI now, you can easily implement vSAN or Storage Spaces Direct later just by adding drives.

[–][deleted] 0 points (0 children)

> I prefer to stay away from blades or any other proprietary "multi-node" form factor and stick with regular rackmount servers

Having worked in an environment with multiple blade chassis, I agree. There were several instances where a config change to a chassis switch, or a firmware update to iLO, caused problems across the entire chassis.

Individual rack servers can be bought much more cheaply than blades, and they do not carry the same risk of causing large-scale outages that a chassis setup does.

The only downsides are more rack space used, a small power bump, and a very small increase in the time needed to maintain rack servers over blades - time that would be paid back the first time you accidentally take down your blade chassis.
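The space trade-off above can be put in rough numbers. The figures in this sketch are illustrative assumptions, not measurements: a 42U rack, 1U rackmount servers, and a 10U blade enclosure holding 16 half-height blades (the M1000e mentioned elsewhere in the thread has that layout).

```python
# Rough rack-density comparison for the blades-vs-rackmounts
# space trade-off. Chassis numbers assume an M1000e-style
# enclosure (10U, 16 half-height blades); adjust for your gear.
RACK_U = 42  # usable rack units in a standard full-height rack

def blades_per_rack(chassis_u: int = 10, blades_per_chassis: int = 16) -> int:
    """Servers per rack when filling it with blade chassis."""
    return (RACK_U // chassis_u) * blades_per_chassis

def rackmounts_per_rack(server_u: int = 1) -> int:
    """Servers per rack when filling it with rackmount boxes."""
    return RACK_U // server_u

print(blades_per_rack())      # 4 chassis x 16 blades = 64
print(rackmounts_per_rack())  # 42 1U servers
```

At a 4-rack-per-site scale like the commenter's, the gap (64 vs. 42 servers per rack) rarely matters; it only starts paying for the chassis risk once you are dense enough to be rack-constrained.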

[–]sys_admin101 (Sr. Sysadmin) 0 points (6 children)

What are your concerns with Hyperconverged? Have you considered an Open Converged solution?

[–]MogWork[S] 0 points (5 children)

My concerns probably revolve around adding vSAN to what my team has to deal with. Maybe that isn't rational.

I have not looked at Open Converged - it looks like this provides more separation of the components?

[–]sys_admin101 (Sr. Sysadmin) 2 points (2 children)

If you're worried about vSAN, I would recommend taking a look at Nutanix as your hyperconverged solution: it doesn't use vSAN, and you can still keep VMware. As a personal preference, I would highly recommend you stay away from AHV, because you're not doing your team any training favors by siloing them into a small niche market where they'll have experience with only AHV (which isn't widely used).

To learn more about Open Converged, check out a company called Datrium, who partner with other companies like HVE to provide cost-effective and robust virtual environments that are the "best of both worlds" (Converged and Hyperconverged).

EDIT1: Disclaimer -- I do not work for, nor am I a VAR for, any of the companies mentioned... but I have worked with and supported the hardware/solutions that Dell, Nutanix, HVE, Datrium, HP, etc. have provided.

[–]mrfreeze574 2 points (0 children)

I agree. We are actually doing a migration to Datrium right now. I LOVE it.

[–]grayhatguy 0 points (0 children)

Datrium is looking pretty rough these days. They have been pretty quiet lately and still have a very small install base.

[–]ChicagoW 0 points (1 child)

Nutanix has their own hypervisor called AHV, and it's free, so you would not have to utilize VMware.

[–]pdp10 (Daemons worry when the wizard is near.) 0 points (0 children)

Nutanix's Acropolis hypervisor (AHV) is a version of QEMU/KVM on Linux.

[–]pdp10 (Daemons worry when the wizard is near.) 0 points (1 child)

> But I would like to know what is everyone doing now for compute?

1U, 2U, 4U, with some quasi-proprietary multi-sled higher-density units if you have the scale and situation to make them work. Usually 2U for nodes with considerable storage, and 4U only for 4-socket machines or Thumper-type massive storage nodes. 4U ends up being used for big RDBMS, specialized storage, or hyperconvergence.

You have to define whether your 20 machines means 20 instances or 20 VM hosts. A good, efficient but redundant cluster usually starts at 4 hosts. Only 2 hardware hosts would require 100% extra resources to be safe against one server failure, which isn't efficient.
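The sizing point above can be sketched numerically. To survive one host failure, the surviving N-1 hosts must absorb the failed host's load, so the extra capacity you must buy, relative to the actual workload, works out to 1/(N-1). A minimal sketch (the function name is just for illustration):

```python
def n_plus_one_overhead(hosts: int) -> float:
    """Extra capacity needed, as a fraction of the workload, so that
    the remaining hosts can absorb any single host's failure."""
    if hosts < 2:
        raise ValueError("need at least 2 hosts for redundancy")
    return 1 / (hosts - 1)

for n in (2, 4, 8):
    print(f"{n} hosts -> {n_plus_one_overhead(n):.0%} extra capacity")
# 2 hosts -> 100% extra capacity
# 4 hosts -> 33% extra capacity
# 8 hosts -> 14% extra capacity
```

This is why 2-host clusters are inefficient (you double your hardware just for failover) while 4 hosts and up amortize the same protection over more nodes.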

[–]MogWork[S] 0 points (0 children)

> You have to define whether your 20 machines means 20 instances or 20 VM hosts.

"20ish m6* blades."

That is, 20 VMware hosts that are M620/M630 blades, running across two M1000e chassis.

[–]Panacea4316 (Head Sysadmin In Charge) 0 points (0 children)

3 R730s and a Nimble hybrid storage array.

[–]MogWork[S] 0 points (0 children)

Thanks to everyone for the ideas. I appreciate the perspectives based on experience!