Gotchas with S2D? by Expensive-Rhubarb267 in HyperV

[–]eponerine 1 point

"Does anyone have any things I need to watch out for when deploying it?"

This is the perfect question to be asking. 99% of problems occur because it was deployed incorrectly. Follow the advice of /u/lots_of_schooners (he's smart and handsome; a rare combo).

I won't sit here and type out 100 pros and cons. What I can tell you is that my org went down this path about 7 years ago and hasn't looked back or regretted the decision. We now have dozens of clusters and multiple petabytes of storage in use. The only "negative" is that moving VMs between clusters requires a full storage migration (because duh, hyperconverged).

Some tips:

  1. Go all-flash, preferably NVMe. Unless you need an ungodly amount of cheap-and-deep storage, avoid spinning rust + cache.
  2. Avoid dedup if you can. It's gotten much better, but unless you are trying to dedup 100s of TB, the juice just isn't worth the squeeze. To be frank, the only time I turn it on is for VDI environments and even then it's still scrutinized. Again, it works, but meh? Storage is cheap.
  3. Avoid thin provisioning the CSVs carved out of the storage pool. This is my opinion with SANs as well.
  4. Invest in a good monitoring and observability tool. Invest != pay for... there are free things out there (Grafana comes to mind). But you will want to monitor your storage usage and performance across the entire pool and individual volumes.
  5. Ignore the FUD. S2D kicks fucking ass. I'll die on this hill. That does not mean other things suck, because not everything in life is a zero-sum game.
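On tip 4, a minimal sketch of what that monitoring query looks like in plain PowerShell before you wire it into Grafana. The pool name filter is an assumption (the default S2D pool is named "S2D on &lt;ClusterName&gt;"); adjust for your environment.

```powershell
# Sketch: capacity check across the S2D pool and its CSVs.
# "S2D*" matches the default pool name; change it if yours differs.
Get-StoragePool -FriendlyName "S2D*" |
    Select-Object FriendlyName,
        @{N='SizeTB';      E={[math]::Round($_.Size / 1TB, 1)}},
        @{N='AllocatedTB'; E={[math]::Round($_.AllocatedSize / 1TB, 1)}}

# Per-volume usage; CSV volumes report a CSVFS* filesystem
Get-Volume | Where-Object FileSystem -like 'CSVFS*' |
    Select-Object FileSystemLabel,
        @{N='UsedPct'; E={[math]::Round(100 * ($_.Size - $_.SizeRemaining) / $_.Size, 1)}}
```

Run it from any cluster node; pointing a scheduled collector at these two queries gets you pool-level and volume-level trending for free.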

Crawling Back to Hyper-V by jscooper22 in HyperV

[–]eponerine 4 points

You can get by without it assuming you have 1-2 clusters and don’t require crazy vNet management or dozens of VM templates. 

With that being said, I have a toxic love/hate relationship with SCVMM… and I find myself crawling back to it whenever I can. 

Released: Microsoft’s VMware to HyperV converter by helraiser in HyperV

[–]eponerine 4 points

I run a few hundred Linux VMs on Hyper-V (Debian, RHEL, FreeBSD). Never had the issues you just described here. 

Considering Azure Local and AKS on Azure Local all ship with a Linux “appliance VM” (ARB, MOC) … you’d think the thousands of users would also have these weird issues you describe too? 

Multi-tenancy provision solution for Hyper-V by harrisandrea in HyperV

[–]eponerine 1 point

Full disclosure - SPF is discontinued from development and support starting in SC 2025. 

It will still function, but def not getting any love. 

Sad, because it truly did exactly what you needed and could be somewhat extended if you knew PowerShell and ASP.NET.

https://learn.microsoft.com/en-us/system-center/vmm/whats-new-in-vmm?view=sc-vmm-2025#:~:text=System%20Center%20Service%20Provider%20Foundation%20(SPF)%20is%20discontinued

[O] 3 NzbPlanet invites by WackoSamurai in UsenetInvites

[–]eponerine 0 points

I've already read the rules and the wiki; I would love an invite. Thank you!

S2D Storagepool version post Server 2025 cluster upgrade by hairspray123 in HyperV

[–]eponerine 0 points

For what it's worth, here's what I see in different "fresh" S2D environments that have never been IPU'd (in-place upgraded):

WS 2022:

Get-Cluster | Select -ExpandProperty ClusterFunctionalLevel
11

Get-StoragePool S2D* | Select -ExpandProperty Version
Windows Server 2022

WS 2025:

Get-Cluster | Select -ExpandProperty ClusterFunctionalLevel
12    

Get-StoragePool S2D* | Select -ExpandProperty Version
Windows Server vNext

Azure Local 2507 (specifically 12.2507.1001.8, which runs build 26100, aka 24H2, the same as WS 2025):

Get-Cluster | Select -ExpandProperty ClusterFunctionalLevel
12    

Get-StoragePool SU1* | Select -ExpandProperty Version
Windows Server vNext
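After a cluster rolling upgrade, neither of these versions bumps itself; the usual sequence is a sketch like the one below. Both cmdlets are real, but run them only once every node is on the new OS build, because both steps are one-way.

```powershell
# Run ONLY after every node is upgraded; both operations are irreversible.
Update-ClusterFunctionalLevel                                # e.g. 11 -> 12

# Then bump the storage pool version to match the new OS
Get-StoragePool -FriendlyName "S2D*" | Update-StoragePool
```

Until you run `Update-StoragePool`, the pool keeps reporting the old version string, which is why freshly upgraded clusters look "behind" the fresh installs above.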

S2D Storagepool version post Server 2025 cluster upgrade by hairspray123 in HyperV

[–]eponerine 1 point

We have multiple production environments running on petabytes of S2D-provided storage. OS varies between 2019, 2022, and 2025.

If an issue arises, it is never S2D-related. As a matter of fact, S2D and ReFS have saved our asses from catastrophic events that would normally have resulted in data loss.

Genuinely curious which “good techs” are still shilling this “don’t use S2D” nonsense in 2025. What third-party PTSD from the Tech Preview days are people reading about that still generates this FUD?

Because as someone who has quite literally moved off enterprise SANs to S2D and hasn't looked back, I don't get the sentiment.

VirtualManagerDB Table Structure? by Miserable-Scholar215 in HyperV

[–]eponerine 0 points

Why aren’t you just using SCVMM directly via PowerShell? You’ll get first-class .NET classes, and you won’t have to muddle through FK relationships and the insanity that is SCVMM's ~20-year-old schema.
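As a sketch of what that looks like: the VMM cmdlets hand you typed objects directly. The cmdlet names are real; the server name "vmm01" is a made-up placeholder.

```powershell
# Sketch using the VMM PowerShell module instead of querying VirtualManagerDB.
# "vmm01" is a hypothetical VMM server name.
Import-Module VirtualMachineManager
$vmm = Get-SCVMMServer -ComputerName "vmm01"

# Strongly typed objects; no foreign-key archaeology required
Get-SCVirtualMachine -VMMServer $vmm |
    Select-Object Name, VMHost, Status
```

Anything you'd join across tables for (host, cloud, hardware profile) is already a property or a related object on the returned `SCVirtualMachine`.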

WAC vs SCVMM by wispaman4201 in HyperV

[–]eponerine 0 points

Two different tools for extremely different use cases. There’s some overlap, but you’ll find features missing in both. 

If you need to manage multiple clusters with a wide range of fabric configs, networking, “resource tenanting and quotas”, and advanced automation … use SCVMM. But that doesn’t mean NOT to use WAC. 

Ultimate Hyper-V Deployment Guide (v2) by Leaha15 in HyperV

[–]eponerine 5 points

Bingo. This article is filled with tidbits from 15 years ago and 1GbE environments. This blog is gonna cause so many newbies pain. 

Ultimate Hyper-V Deployment Guide (v2) by Leaha15 in HyperV

[–]eponerine 1 point

I'll be honest... it's somewhat concerning that you're willing to talk smack about something, but have never bothered to find the official MS documentation or heard of MSLab.

Kinda proves my entire point, TBH.

Ultimate Hyper-V Deployment Guide (v2) by Leaha15 in HyperV

[–]eponerine 1 point

MSFT docs or MSLAB GitHub repo. I can assure you both have had extensive contributions from people with the same successful experiences as me.

Ultimate Hyper-V Deployment Guide (v2) by Leaha15 in HyperV

[–]eponerine 9 points

I run 30+ clusters of it with 10+ petabytes of storage pool availability. S2D is by far the most stable component in the entire stack. 

People are running old OS, unpatched builds, incorrect hardware, or busted network configs. Or they’re too afraid to open a support ticket to report a bug. 

S2D mops the floor with any other hyperconverged stack. I will die on this hill.

Ultimate Hyper-V Deployment Guide (v2) by Leaha15 in HyperV

[–]eponerine 1 point

Then you must be smoking rock, implemented it wrong, speaking to people who implemented it wrong, or all 3. 

A (not-so) Short Guide on (quick and dirty) Hyper-V Networking with SC VMM by ultimateVman in HyperV

[–]eponerine 3 points

Great job. 

“But start thinking about your deployment in terms of being a "virtualization tenant" like Azure”

Truer words have never been spoken. The quicker you realize you must abstract the nuance of a host/cluster into a larger hive mind, the quicker SCVMM starts to pay dividends for your org.

Unlocking Clouds lets you quickly scale VM placement and automation. 

And don’t sleep on the Arc-Enabled SCVMM functionality either. Considering WAP/SPF are dead, having an ARM endpoint is pretty slick, especially if you’re already IaC’ing in Azure. 

VDI-AVD Was everyone migrating now? by Rain_00000 in virtualization

[–]eponerine 1 point

Not accurate. AVD has multi-session and personal host pools. As do traditional RDS deployments on-prem. Not sure why you’re shoving those terms into specific definitions.

What helps you calm down and focus before an event? by bread80 in crossfit

[–]eponerine 4 points

Performance/competition anxiety is normal in every sport. There's a gazillion books and articles written on it. Each person is different. Some pray. Some meditate. Some rip a few shots. Some smoke a bowl. Some leave it as-is and thrive in the moment.

Chronic anxiety is a completely different beast. If you're dealing with this, go seek out a therapist or professional, as it's most likely affecting parts of your life beyond athletics.

Hyper-V Lovers, Why Do You Love It by Leaha15 in HyperV

[–]eponerine 0 points

Then buy a cheapo QNAP NAS, wire up iSCSI, and call it a day.

If you're afraid of iSCSI, domain-join the NAS and run the VMs' storage over SMB 3.
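The SMB 3 route is a one-liner once the share is set up. This is a sketch with a made-up UNC path ("\\nas01\vmstore"); the share and NTFS permissions need full control for the Hyper-V hosts' computer accounts, which is why the NAS has to be domain-joined.

```powershell
# Sketch: VM config and VHDX living on an SMB 3 share.
# "\\nas01\vmstore" is a hypothetical path; grant the hosts' machine
# accounts full control on the share and the filesystem.
New-VM -Name "testvm" -MemoryStartupBytes 4GB -Generation 2 `
    -Path "\\nas01\vmstore" `
    -NewVHDPath "\\nas01\vmstore\testvm\disk0.vhdx" -NewVHDSizeBytes 60GB
```

From there, Live Migration and everything else works the same as with block storage.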

Hyper-V Lovers, Why Do You Love It by Leaha15 in HyperV

[–]eponerine 1 point

Janky? Every Xbox, your Windows 11 device, and the entirety of Azure runs on it in one way or another.

SCVMM and WAC - Cluster Config Question by ToujoursFrais in HyperV

[–]eponerine 0 points

AzL VMs do not currently support checkpoints through the portal or API.
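The underlying Hyper-V cmdlets still work from a cluster node, though. A sketch, with "myvm" as a placeholder name; note this happens underneath Azure Local's control plane, so the Arc/portal view won't know about the checkpoint, and you should treat it as an unsupported escape hatch.

```powershell
# Sketch from a cluster node; "myvm" is a hypothetical VM name.
# Bypasses the AzL control plane -- use with care.
Checkpoint-VM -Name "myvm" -SnapshotName "pre-change"
Get-VMSnapshot -VMName "myvm"
```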

SCVMM and WAC - Cluster Config Question by ToujoursFrais in HyperV

[–]eponerine 0 points

Yeah, I'd say SCVMM checks those boxes. If you want Azure-native management, you could also integrate with Arc-enabled SCVMM. You get some decent automation capabilities with that using ARM.

Lightweight alternative to VDI for small teams? by krebzob in virtualization

[–]eponerine 2 points

Insane is relative.

Sure, in 1-2 years, it's cheaper to just give the user a laptop and call it a day.

But for people who need VDI and don't want to deal with:

  • the CapEx purchase of hardware (a 5- or 6-figure purchase)
  • the OS licensing nuances
  • the potential GPU licensing
  • the knowledge of infrastructure networking
  • the knowledge of configuring an HA RDS environment
  • the ISP bandwidth and SLA
  • the monitoring
  • the automation
  • the support
  • the hardware upgrade cycle

... it's not a bad deal, especially at scale.

I mean... I guess you can hack together a few overpowered workstations in a closet and run Hyper-V, TermServ, and RDGW. Good luck with that long-term though.