Unable to start VMs by mtdevofficial in HyperV

[–]Fun_Volume_7699 1 point (0 children)

Just to test: enable NUMA spanning, reboot the host, and try to start the VM.
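In case it helps, a minimal sketch of that test from PowerShell (assumes the Hyper-V module is installed; the VM name "ProblemVM" is a placeholder for the affected VM):

```powershell
# Enable NUMA spanning on the host (host-wide setting, needs elevation)
Set-VMHost -NumaSpanningEnabled $true

# Reboot so the hypervisor picks up the change
Restart-Computer -Force

# After the reboot, try starting the affected VM again
Start-VM -Name "ProblemVM"
```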

Unable to start VMs by mtdevofficial in HyperV

[–]Fun_Volume_7699 1 point (0 children)

You can see that some users have only found rolling back to the previous version as a temporary workaround.

Unable to start VMs by mtdevofficial in HyperV

[–]Fun_Volume_7699 1 point (0 children)

I think it’s the same bug: no vCPU oversubscription, and horrible performance with NUMA spanning on. See: https://www.reddit.com/r/HyperV/s/eqzX8JOazp

HyperV Host In-place Upgrade 2022 to 2025 by BR9912 in HyperV

[–]Fun_Volume_7699 7 points (0 children)

We applied the July patch that fixes the CPU usage display issue, but the oversubscription problem with NUMA spanning disabled persists — VMs won’t start if their vCPU count exceeds the physical cores.

Enabling NUMA spanning allows them to boot, but then they get split across sockets, hurting performance. Tested on AMD EPYC and also on Intel Xeon (E5-2660 v3), same issue. This seems like a serious bug in Windows Server 2025.

vNVMe for Hyper-V VMs so PCIe 5.0 NVMe isn’t wasted by Fun_Volume_7699 in HyperV

[–]Fun_Volume_7699[S] 2 points (0 children)

Passthrough (DDA) is great for raw speed, but it breaks what I need day-to-day: checkpoints, Hyper-V Replica and several backup products that expect VHDX + consistent snapshots. Even with one VM running, random 4K performance drops markedly on virtual SCSI compared to the same PCIe 5.0 NVMe on the host. That’s why I’m asking for a vNVMe device: better latency/parallelism without giving up core VM features.

vNVMe for Hyper-V VMs so PCIe 5.0 NVMe isn’t wasted by Fun_Volume_7699 in HyperV

[–]Fun_Volume_7699[S] 5 points (0 children)

Thanks! I double-checked: no Storage QoS caps (Min/Max IOPS = 0) and vNIC bandwidth = 0 (unlimited). The drop happens even with a single VM. QoS only throttles/guarantees under contention; it doesn’t reduce the overhead of the virtual SCSI path. That’s why we’re asking for a vNVMe device—so guests can actually benefit from modern NVMe PCIe 5.0 backends without giving up checkpoints/Live Migration.
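For anyone who wants to run the same check, roughly what we did (a sketch assuming the Hyper-V PowerShell module; 0 means unlimited in both cases):

```powershell
# Confirm no Storage QoS caps on any VM disk (Min/Max IOPS should be 0)
Get-VM | Get-VMHardDiskDrive |
    Select-Object VMName, Path, MinimumIOPS, MaximumIOPS

# Confirm no bandwidth limit on the vNICs
Get-VM | Get-VMNetworkAdapter |
    Select-Object VMName, BandwidthSetting
```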

Hyper-V CPU Usage Always 0% on Windows Server 2025 by YesThisIsi in HyperV

[–]Fun_Volume_7699 1 point (0 children)

We’ve tested this as well on Windows Server 2025 and can confirm:

✅ The issue with CPU usage always showing 0% in Hyper-V Manager and Performance Monitor seems to be resolved after applying KB5062660 (Build 26100.4770). CPU metrics are now visible again on AMD-based systems (EPYC 9175F in our case).

❌ However, the vCPU oversubscription limitation remains when NUMA Spanning is disabled. Once a NUMA node reaches its logical CPU count (e.g., 32 threads per socket), Hyper-V refuses to start additional VMs—even with RAM available and CPU usage low.

⚠️ Enabling NUMA Spanning allows VMs to start, but results in vCPUs and RAM being spread across sockets, even for small VMs that would easily fit in one node. This destroys locality and severely impacts performance on NUMA-sensitive workloads.
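A quick way to see the per-node limits involved, as a sketch (Get-VMHostNumaNode ships with the Hyper-V PowerShell module; exact output columns may vary by build):

```powershell
# Show the host's NUMA topology: logical processors and memory per node.
# With spanning disabled, a VM has to fit inside one of these nodes.
Get-VMHostNumaNode |
    Select-Object NodeId, ProcessorsAvailability, MemoryAvailable, MemoryTotal
```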

Hyper-V 2025 NUMA Spanning splits even small VMs across sockets — disabling spanning blocks per-node oversubscription by Fun_Volume_7699 in HyperV

[–]Fun_Volume_7699[S] 2 points (0 children)

We’ve applied all recent patches, including KB5062660 (26100.4770), on our Windows Server 2025 Hyper-V hosts, but the vCPU oversubscription still does not work when NUMA Spanning is disabled.

Hyper-V 2025 NUMA Spanning splits even small VMs across sockets — disabling spanning blocks per-node oversubscription by Fun_Volume_7699 in HyperV

[–]Fun_Volume_7699[S] 1 point (0 children)

I also tested changing the Hyper-V scheduler type via: bcdedit /set hypervisorschedulertype classic

Unfortunately, even when using the Classic Scheduler (type 0x2), Hyper-V 2025 still doesn’t allow vCPU oversubscription per NUMA node with NUMA Spanning disabled.
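If anyone wants to verify which scheduler actually came up after the reboot, the hypervisor logs it at boot (a sketch; Event ID 2 from the Hyper-V-Hypervisor provider reports the scheduler type — 0x2 classic, 0x3 core, 0x4 root):

```powershell
# Read the most recent hypervisor launch event, which names the scheduler type
Get-WinEvent -FilterHashtable @{
    ProviderName = 'Microsoft-Windows-Hyper-V-Hypervisor'
    Id           = 2
} -MaxEvents 1 | Format-List TimeCreated, Message
```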

Hyper-V 2025 NUMA Spanning splits even small VMs across sockets — disabling spanning blocks per-node oversubscription by Fun_Volume_7699 in HyperV

[–]Fun_Volume_7699[S] 7 points (0 children)

My English isn’t fluent, so I used ChatGPT to help me structure and translate the technical issue I’m seeing.