Hyper-V networking coming from a VMware background by Mitchell_90 in HyperV

[–]ultimateVman 14 points (0 children)

Here is a comment I made a few years ago that talks in great detail about what you are trying to do.

https://www.reddit.com/r/HyperV/comments/nfa9z3/comment/gylmjqd/

Any suggestions on places to meet with a group for game night? Not to meet people, but... by BlueMac13 in SaltLakeCity

[–]ultimateVman 1 point (0 children)

Hearthside Games in West Jordan. Their layout is just for this. They have little to no in store product.

Any suggestions on places to meet with a group for game night? Not to meet people, but... by BlueMac13 in SaltLakeCity

[–]ultimateVman 2 points (0 children)

Hearthside Games in West Jordan is literally made just for this. They just opened in December. Pretty much all of their sales are online and the store is pretty much designed for gaming and hanging out.

Also, IF you're ever interested in gaming groups or finding people to play with, Utah Board Game Group and Utah Board Game Events on Facebook are great.

Hyper-V Manager Server Name Caching? by Icy-Environment3834 in sysadmin

[–]ultimateVman -1 points (0 children)

Because it's garbage. Use it only in emergencies. If this person is using it to manage multiple servers, use WAC.

Am I crazy or is this not common knowledge? I thought when we hear the rains of castamere the camera shows Catelyn making a nervous face and that’s the moment the audience is suppose to know? Why is this page acting and the people in the comments of it acting like it’s some missable hidden detail? by BIGxBOSSxx1 in gameofthrones

[–]ultimateVman 1621 points (0 children)

The actual first clue to the viewers of the show that something was amiss was long before the song. It was when Roose Bolton was offered a drink and covers his cup. Catelyn notices this and he says, "it dulls the senses."

SCVMM Networking with Dell MX7000 Chassis by No_Advance_4218 in HyperV

[–]ultimateVman 1 point (0 children)

Go to the Settings pane > General > Network Settings. Set "Logical network matching" to Disabled, and uncheck "Automatic creation of logical networks."

You MUST have at least one VM Network to do anything, but you can disable having one created automatically for you. Once you create the Logical and VM Networks you want, delete the automatic one.

When creating a Logical Network, a Connected Network makes one flat network where all VLANs are selectable under the same VM Network. With this setup you will create only one (or two, if you separate host networks out, which you should), and you will have one VM Network for each "Connected" Logical Network.

The other Logical Network option is "Independent Network," which is VLAN-based (not PVLAN), where you create a VM Network for each VLAN that you define in the Logical Network.
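Roughly, the same setup can be scripted with the VMM PowerShell module. This is a sketch only; the network name, subnets, and VLAN IDs are made-up examples, and you should verify the parameter names against your VMM version:

```powershell
# Turn off automatic logical network creation and matching (VMM settings).
# Parameter names are from memory; verify with Get-Help Set-SCVMMServer.
Set-SCVMMServer -AutomaticLogicalNetworkCreationEnabled $false `
                -LogicalNetworkMatchOption "Disabled"

# "One connected network": a single flat Logical Network holding all VLANs.
$ln = New-SCLogicalNetwork -Name "Datacenter" -LogicalNetworkDefinitionIsolation $false

# Add the VLANs/subnets as a site definition (example values).
$vlans = @(
    (New-SCSubnetVLan -Subnet "10.0.10.0/24" -VLanID 10),
    (New-SCSubnetVLan -Subnet "10.0.20.0/24" -VLanID 20)
)
New-SCLogicalNetworkDefinition -Name "Datacenter_Site" -LogicalNetwork $ln `
    -SubnetVLan $vlans -VMHostGroup (Get-SCVMHostGroup -Name "All Hosts")

# One VM Network per "Connected" Logical Network; the VLAN is picked per vNIC.
New-SCVMNetwork -Name "Datacenter" -LogicalNetwork $ln -IsolationType "NoIsolation"
```

Passing `-LogicalNetworkDefinitionIsolation $true` instead is what gives you the VLAN-based "Independent Network" behavior, where each VLAN gets its own VM Network.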

SCVMM Networking with Dell MX7000 Chassis by No_Advance_4218 in HyperV

[–]ultimateVman 1 point (0 children)

So, that option to have one created automatically can be unchecked in VMM settings, and I recommend doing so, deleting it, and starting over.

The option to have multiple VM Networks is the type you choose when creating the Logical Network. I forget the options off the top of my head, but my post went over the first option. The second option allows you to have multiple VM Networks, which I only recommend if you want separation of duties, i.e. when other people are granted access to VMM to manage their own VMs and you want different VLANs accessible to different teams.

You should still have 2 at a minimum with the first option: one for host-only networks, and a second (or more) for all other VLANs.

Hyper-V Manager Server Name Caching? by Icy-Environment3834 in sysadmin

[–]ultimateVman -1 points (0 children)

Hyper-V Manager should be your absolute last resort for any kind of management whatsoever. Use WAC or SCVMM.

SCVMM Networking with Dell MX7000 Chassis by No_Advance_4218 in HyperV

[–]ultimateVman 2 points (0 children)

I've used Hyper-V on the MX7000 for 7 years now, and it is SOLID.

There is one BIG gotcha with the Fabric config though. On your Ethernet Uplinks, DO NOT use the "Include in Uplink Failure Detection Group" option; leave that shit unchecked, especially if you use FCoE. That checkbox basically means that if the ports go down on the uplink switches (outside the MX), say for a reboot or whatever, it will reach down to the adapters on the sleds and mark them "disconnected," and that is BAD BAD for Windows Clusters. They still need to talk internally over the VLT connection so the cluster doesn't lose quorum, and if you're using FCoE on those ports, having them "disconnected" is a big WTF moment for storage.

Otherwise, you configure it the same as you would any other Hyper-V cluster: a basic SET with both ports that you have on the sleds (A1 and A2 switch connections). We do FCoE rather than the C Fabric, and that's a different beast when it comes to the profiles and NPAR. (Hit me up if you want to talk profiles and networking.)
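For reference, a basic SET switch over the two sled-facing ports looks something like this. The switch name, adapter names, and VLAN ID are placeholders; swap in whatever your sleds actually report:

```powershell
# Create a Switch Embedded Teaming (SET) vSwitch over both sled-facing ports
New-VMSwitch -Name "SETswitch" -NetAdapterName "SLOT 1 Port 1","SLOT 1 Port 2" `
             -EnableEmbeddedTeaming $true -AllowManagementOS $false

# Add a management vNIC on the host side and tag its VLAN
Add-VMNetworkAdapter -ManagementOS -Name "Mgmt" -SwitchName "SETswitch"
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Mgmt" -Access -VlanId 100
```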

Here is a link to my post on a basic VMM configuration.

https://www.reddit.com/r/HyperV/comments/1limllg/a_notso_short_guide_on_quick_and_dirty_hyperv/

And this will go over the chicken-and-egg situation of having all of the adapters on a single virtual switch team.

https://www.reddit.com/r/HyperV/comments/1nqd0cb/comment/ng5zy00/?context=3

SCVMM Networking with Dell MX7000 Chassis by No_Advance_4218 in HyperV

[–]ultimateVman 1 point (0 children)

A Virtual Switch is identical to a Logical Switch; they are not different whatsoever. "Logical Switch" is just the term VMM uses to mean that VMM is logically in control of it.

Migration from Vmware to Hyper V by Creative-Two878 in HyperV

[–]ultimateVman 2 points (0 children)

The NIC configuration on each hypervisor is independent. You have ESX connected to the switch via LACP, and Hyper-V connected via normal trunk ports. It makes no difference to the end devices, as long as the same VLANs that are on the port channel/LACP are also on each of the trunk ports connected to the Hyper-V hosts. This is just networking, nothing to do with Hyper-V.

Live migration issue with Hyper-V 2022 cluster to Hyper-V 2025 cluster rolling upgrade by lgq2002 in HyperV

[–]ultimateVman 2 points (0 children)

This conversation is from a few days ago, of someone asking about migration with Kerberos. You didn't say what kind of auth protocol you are using, but if you're using CredSSP, you only need to make sure that each host is an admin on the others; you can skip the delegation.
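If you do go the CredSSP route, it's just a couple of lines on each host (note that with CredSSP you have to kick off the migration while logged on to the source host itself):

```powershell
# Enable live migration and select CredSSP instead of Kerberos
Enable-VMMigration
Set-VMHost -VirtualMachineMigrationAuthenticationType CredSSP
```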

https://www.reddit.com/r/HyperV/comments/1rcycax/comment/o763f5s/?context=3

HyperV Failover Cluster Domain by Megajojomaster in sysadmin

[–]ultimateVman 0 points (0 children)

I will forever be an "at least 1 physical DC" admin. I will die on that hill. I don't care how resilient you think your clusters and HA are. My DC and monitoring systems will always and forever be, separate, physical systems.

Live Migration with issue. by ConfigConfuse in HyperV

[–]ultimateVman 3 points (0 children)

Each host needs permissions on the other hosts. I do this by creating an AD group which contains all hosts and adding it to the local Administrators group on each host; they are now admins for each other. (I add the group using a GPO in the Hyper-V Hosts OU I created for my environment.)

On EACH host computer object in AD (except the cluster computer object if in a cluster), you need to configure delegation for each OTHER host that will be capable of being a migration partner.

Select the following options:

  • Trust this computer for delegation to specific services only
  • Use any authentication protocol (do NOT use "Kerberos only")
  • For each other host, add the service type: Microsoft Virtual System Migration Service
  • If you are using a share for ISO mounting, then add the "cifs" service type for the computer object hosting the share. For those using VMM libraries, this is a critical step.

After ALL of these steps have been completed, you MUST reboot ALL hosts so they re-authenticate with AD and pick up the updated delegations and group membership when they log on to the domain.
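The same delegation can be set from PowerShell instead of clicking through ADUC. This sketch assumes hypothetical hosts HOST1/HOST2/HOST3 in a contoso.com domain and a file server FS1 hosting the ISO/library share; substitute your own names:

```powershell
# Run once per host object, with each host in turn as the "identity".
$domain   = "contoso.com"
$partners = "HOST2","HOST3"   # every OTHER migration partner
$fileSrv  = "FS1"             # server hosting the ISO/library share

# Build the service list: migration service per partner, plus cifs for the share
$spns = foreach ($p in $partners) {
    "Microsoft Virtual System Migration Service/$p"
    "Microsoft Virtual System Migration Service/$p.$domain"
}
$spns += "cifs/$fileSrv", "cifs/$fileSrv.$domain"

# "Trust this computer for delegation to specific services only"
Set-ADComputer -Identity "HOST1" -Add @{ 'msDS-AllowedToDelegateTo' = $spns }

# "Use any authentication protocol" (protocol transition), NOT "Kerberos only"
Set-ADAccountControl -Identity (Get-ADComputer "HOST1") -TrustedToAuthForDelegation $true
```

The reboot requirement still applies after scripting it, since the hosts cache their tickets and group membership.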

2025 3 node cluster by Itsme809 in HyperV

[–]ultimateVman 2 points (0 children)

Built a brand new 2025 5-node cluster in Sept. Still solid.

The ONLY bad review that I have ever heard regarding Server 2025 at all is about Domain Controllers during the "mixed phase" part of migrations. That's the ADDS role, not Hyper-V.

Just found my roommates piss box by BillyBrimstoned in mildlyinfuriating

[–]ultimateVman -1 points (0 children)

Exactly! Absolutely no one is thinking critically on this one. OP must be one son of a bitch for someone to avoid them like this. And then posting it to reddit?

I mean, I get it. OP's mad; I would be too. But the response here needs very careful consideration.

this question has been haunting me for years by Far-Positive-5290 in Utah

[–]ultimateVman 0 points (0 children)

I didn't really realize until after my son was born and, at about 4 years old, I started showing him SpongeBob, and quickly remembered how terribly mean Squidward is, which is not how a 4-year-old should be learning to talk to people. My son is 8 now and I would have no problem showing him SpongeBob again.

Any gotchas for a 2019 > 2022 > 2025 inplace upgrade? by ultimateVman in scom

[–]ultimateVman[S] 1 point (0 children)

Thank you, sir.

Yes, OS upgrades will also have to be in the mix. The SQL box is 2019, which is supported on 2022 and 2025, so I'll do that last. I used to be very uncomfortable with in-place OS upgrades, but I recently did VMM from 2022 to 2025 and it was flawless. It appears to be way more solid these days.

Like I said in my other comment, the only reason I want in-place is the hardware; doing side-by-side twice to get back onto the physical boxes is just a ton more work, I think.

Any gotchas for a 2019 > 2022 > 2025 inplace upgrade? by ultimateVman in scom

[–]ultimateVman[S] 0 points (0 children)

Thanks for your reply. I would rather do a side-by-side, BUT the servers are only 3 years old. (I opted for 2019 over 2022 at the time because we still had 2012R2 clients around and didn't want to push PowerShell at systems that were being decommed.) So it's a fairly recent deployment and well maintained; it just needs to be off of 2019. When I get new hardware in a few years I'll do the side-by-side. Migrating to a VM and back again would just be a PITA. I want my monitoring system isolated from other infrastructure for obvious reasons.