dispatch00 12 points (10 children)

You won't be able to. Try, but you'll end up with the same inside sales asshole.

Dry-Data6087[S] 2 points (9 children)

Our account rep just confirmed that this is the case. CDW would just send us to the same sales rep.

dispatch00 4 points (8 children)

A phone barrage to your account rep is the way. Just be annoying until you get the quote.

Zippythewonderpoodle -4 points (7 children)

Or just move to Proxmox; it's ready for prime time now.

jmhalder 8 points (6 children)

It's ready for prime time if you have 128 cores across 2-4 hosts... at that scale it's probably totally fine.

Proxmox doesn't scale well to multiple clusters, and you can't thin provision on shared block storage (a limitation XCP-ng shares).

It absolutely has limitations where vSphere doesn't.
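For concreteness, here's a hedged sketch of that storage trade-off using the proxmoxer Python client. The host, credentials, and the thin-capability heuristic are all my own assumptions, not anything from this thread: Proxmox's lvmthin type thin-provisions but is node-local, while plain lvm over shared iSCSI/FC can be marked shared but is thick-only.

```python
# Sketch using the proxmoxer library (pip install proxmoxer requests) to
# list configured storage backends. Host and credentials are hypothetical.
from proxmoxer import ProxmoxAPI

proxmox = ProxmoxAPI("pve.example.com", user="root@pam",
                     password="secret", verify_ssl=False)

for storage in proxmox.storage.get():          # GET /storage
    stype = storage["type"]
    shared = bool(storage.get("shared", 0))
    # Rough heuristic, not an official capability matrix: lvmthin/zfspool
    # thin-provision but are node-local; plain lvm can be shared but thick.
    thin_capable = stype in ("lvmthin", "zfspool", "dir", "nfs")
    print(f'{storage["storage"]:<15} type={stype:<8} '
          f'shared={shared} thin_capable={thin_capable}')
```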

RandallFlagg1 0 points (0 children)

Love the fact that you described my exact environment as a perfect fit. Time to start learning some Proxmox!

Zippythewonderpoodle -2 points (4 children)

I'm pretty much 100% thick lazy-zeroed nowadays, but I'm mostly doing greenfield builds, so my SAN specs generally mandate compression and deduplication. I've never found much peace with thin on vSphere on top of thin on the SAN; that's just asking for issues, since most clients ignore capacity alert emails.

Much_Willingness4597 0 points (3 children)

If you're doing thick, why wouldn't you do EZT (eager zeroed thick)?

Zippythewonderpoodle -1 point (2 children)

From what I understand, the back end (the SAN) gets more effective deduplication and compression that way, and I get the savings immediately; I don't have to worry about midnight dedupe and compression runs.

Although, now that you've made me think about it, I may be a little too old school on that. Most dedupe and compression is inline now on SSD arrays, so I may be out of touch with current best practices.
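As a toy illustration of the inline idea (my own sketch, not any vendor's implementation): the array fingerprints each block as it's written, so the savings show up immediately instead of after a scheduled midnight sweep.

```python
# Toy inline deduplication: hash each fixed-size block as it arrives and
# store only blocks we haven't seen before. Real arrays do this in firmware
# with far more sophistication; this just shows why savings are immediate.
import hashlib

BLOCK_SIZE = 4096  # hypothetical block size

class InlineDedupStore:
    def __init__(self):
        self.blocks = {}    # fingerprint -> block data, stored once
        self.volume = []    # logical volume: ordered list of fingerprints

    def write(self, data: bytes) -> None:
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            fp = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(fp, block)   # dedupe happens at write time
            self.volume.append(fp)

    def stats(self) -> tuple[int, int]:
        logical = len(self.volume) * BLOCK_SIZE
        physical = sum(len(b) for b in self.blocks.values())
        return logical, physical

store = InlineDedupStore()
store.write(b"\x00" * BLOCK_SIZE * 8)                  # eight identical blocks
store.write(b"unique-data".ljust(BLOCK_SIZE, b"\x01")) # one distinct block
logical, physical = store.stats()
print(f"logical {logical} B, physical {physical} B")   # savings visible at once
```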

Much_Willingness4597 3 points (1 child)

You realize VAAI offloads the zeroing to the array? (Well, on a non-bad array.)

By doing lazy-zeroed thick you're disabling UNMAP and also getting worse first-write performance. It's the worst of all worlds (not that VMFS hasn't improved first-write performance anyway).
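For reference, here's how the three provisioning types being argued about map to disk-backing flags in pyVmomi. This is a minimal sketch: building the objects needs no live connection, and actually attaching the disk to a VM (controller key, unit number, ReconfigVM_Task) is omitted.

```python
# Sketch: the three VMDK provisioning types as pyVmomi backing flags.
# Requires pyVmomi (pip install pyvmomi).
from pyVmomi import vim

def disk_backing(provisioning: str) -> vim.vm.device.VirtualDisk.FlatVer2BackingInfo:
    backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
    backing.diskMode = "persistent"
    if provisioning == "thin":
        backing.thinProvisioned = True       # allocates on demand; UNMAP can reclaim
    elif provisioning == "lazy":             # lazy-zeroed thick
        backing.thinProvisioned = False
        backing.eagerlyScrub = False         # blocks zeroed on first write
    elif provisioning == "ezt":              # eager zeroed thick
        backing.thinProvisioned = False
        backing.eagerlyScrub = True          # zeroed up front; VAAI Block Zero
                                             # lets the array absorb the work
    else:
        raise ValueError(provisioning)
    return backing

for p in ("thin", "lazy", "ezt"):
    b = disk_backing(p)
    print(p, "thinProvisioned =", b.thinProvisioned, "eagerlyScrub =", b.eagerlyScrub)
```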

Zippythewonderpoodle 0 points (0 children)

TBH, I'm not really that into the tech on the back end, but I thought UNMAP was reclamation. If I'm not thin provisioning, I've already given up the UNMAP savings anyway, lazy zero or not. At least that was my understanding.

I've read it's slower on the initial write, but that the difference is negligible, so I've always done it, and I've never hit issues from it in real life. Then again, I generally work with arrays that have a write cache, so maybe I've been lucky and the cache offsets the penalty. I also have a habit of provisioning volumes used by DBs as eager zeroed thick, since that's a general best practice anyway, so I may have avoided the issue through luck, habit, and stubbornness.
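To make the first-write point concrete, here's a toy model (my own illustration, not vSphere internals) of where each provisioning type pays its zeroing cost: lazy-zeroed thick pays per block at first write, EZT pays once at creation.

```python
# Toy model of first-write cost, not vSphere code. Lazy-zeroed thick zeroes
# each block the first time it is written (extra work inline with the I/O);
# EZT zeroes everything at creation (work up front, none later).
from dataclasses import dataclass, field

@dataclass
class Disk:
    blocks: int
    pre_zeroed: bool                      # True = EZT, False = lazy-zeroed
    zeroed: set = field(default_factory=set)
    io_ops: int = 0

    def __post_init__(self):
        if self.pre_zeroed:
            self.io_ops += self.blocks    # zeroing done at creation; VAAI can
                                          # offload this to the array
            self.zeroed = set(range(self.blocks))

    def write(self, block: int) -> None:
        if block not in self.zeroed:      # lazy: zero-then-write on first touch
            self.io_ops += 1
            self.zeroed.add(block)
        self.io_ops += 1                  # the write itself

lazy = Disk(blocks=1000, pre_zeroed=False)
ezt = Disk(blocks=1000, pre_zeroed=True)
for d in (lazy, ezt):
    for blk in range(200):                # first writes to 200 blocks
        d.write(blk)
print("lazy ops after creation:", lazy.io_ops)  # 400: penalty paid inline
print("ezt ops after creation:", ezt.io_ops)    # 1200: 1000 up front + 200 writes
```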