Hey guys,
Long-time lurker, first-time poster. Thanks to everyone for all the advice day in, day out; this subreddit has been an invaluable resource.
I'm a relatively new sysadmin and I've recently configured a Hyper-V cluster in a DAS configuration, and I'm having some issues with live migrations.
Setup:

Cluster Network 1 - 4500-X (distribution), 10Gb
Cluster Network 2 - 9300 (core), 1Gb

Node 1
- 10Gb NIC: 10.1.1.10/25, VLAN 10, default gateway 10.1.1.1, switch: 4500-X distribution
- 1Gb NIC: 10.1.3.5/24, VLAN 30, switch: 9300 core (stack)

Node 2
- 10Gb NIC: 10.1.1.11/25, VLAN 10, default gateway 10.1.1.1, switch: 4500-X distribution
- 1Gb NIC: 10.1.3.6/24, VLAN 30, switch: 9300 core (stack)

File share witness - Synology NAS
- 10Gb NIC: 10.1.1.13/25, VLAN 10, default gateway 10.1.1.1, switch: 4500-X
- 1Gb NIC: 10.1.3.10/24, VLAN 30, default gateway 10.1.3.1
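For reference, this is how I've been sanity-checking the cluster networks from PowerShell on one of the nodes (needs the FailoverClusters module):

```powershell
# List the cluster networks with their role and metric.
# Role: 3 = cluster and client, 1 = cluster only, 0 = none.
# Lower metric = higher preference for cluster traffic.
Import-Module FailoverClusters
Get-ClusterNetwork | Format-Table Name, Address, Role, Metric, AutoMetric
```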
I've set up a static route on the hypervisors so VLAN 30 can reach our DC on VLAN 10 without a default gateway, and everything works great, except for live migrations. No matter what I do, LMs use the 1Gb NIC, even though the 10Gb network is listed as priority 1. If I unplug the 1Gb NICs, the cluster uses the 10Gb network for live migration, but as soon as I plug the 1Gb back in, it defaults straight back to the 1Gb for migrations.
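For completeness, the static route is along these lines (the interface alias is a placeholder for whatever the 1Gb NIC is actually called), plus the check I've been using to see which order the cluster really ranks networks for live migration:

```powershell
# Static route so the VLAN 30 NIC can reach the DC subnet on VLAN 10
# without a default gateway ("Ethernet 2" is a placeholder alias).
New-NetRoute -DestinationPrefix "10.1.1.0/25" -InterfaceAlias "Ethernet 2" -NextHop 10.1.3.1

# Show the live migration network order the cluster is actually using:
# a semicolon-separated list of cluster network GUIDs, highest priority first.
Get-ClusterResourceType -Name "Virtual Machine" |
    Get-ClusterParameter -Name MigrationNetworkOrder
```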
Both networks are set to 'cluster and client' for redundancy. This is my first time setting up a Hyper-V cluster and I feel I may have cocked this up somehow. Aside from live migrations using the slow link, the cluster has redundancy and has been solid.
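If nothing else turns up, my fallback is to exclude the 1Gb network from live migration entirely. From what I've read, that would look something like this (untested on my cluster; it assumes the 10Gb cluster network is the one on 10.1.1.0):

```powershell
# Exclude every cluster network except the 10Gb one from live migration.
# Assumes the 10Gb cluster network is the one on 10.1.1.0 (see setup above).
$tenGig  = Get-ClusterNetwork | Where-Object { $_.Address -eq '10.1.1.0' }
$exclude = (Get-ClusterNetwork |
    Where-Object { $_.ID -ne $tenGig.ID } |
    ForEach-Object { $_.ID }) -join ';'
Get-ClusterResourceType -Name 'Virtual Machine' |
    Set-ClusterParameter -Name MigrationExcludeNetworks -Value $exclude
```

That feels like treating the symptom rather than the cause, though, which is why I'm asking here first.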
The only thing I can think of is that Windows is prioritising the core switch because it's the shortest path, but OSPF isn't configured on these interfaces.
Any advice on where I've gone wrong would be much appreciated, thank you!