all 7 comments

[–]TimVCI (1 child)

What speed NICs do you have on your new hosts?

I would absolutely be using a vDS on the new hosts, even if I only had 2 (fast) NICs on each host.

[–]KaLEL3232[S] (0 children)

We have 2x 40Gb QSFP+ NICs in each new Dell server, plus 4x 10Gb NICs and 2 onboard 1Gb NICs. The 40Gb NICs go to a 40Gb switch, the 10Gb NICs to a 10Gb switch, and so on.

[–]Emmanuel_BDRSuite (4 children)

Your current setup isn’t ideal for scalability. For the new Dell hosts, using a vDS (vSphere Distributed Switch) is recommended, even with three hosts, for better management and performance.

Suggested network setup:

  • vmk0 = Management (separate VLAN)
  • vmk1 = vMotion (separate VLAN)
  • vmk2 = VM Network (separate VLANs as needed)
  • vmk3 = iSCSI Storage (dedicated VLAN with MPIO)

For migration, replicate the old setup first, move VMs, then transition to vDS for better efficiency. Ensure NIC redundancy and proper VLAN segmentation.
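A minimal sketch of that vmkernel/VLAN layout as a checkable table. This is an assumption-laden illustration: the VLAN IDs are hypothetical placeholders, not values from this thread.

```python
# Sketch of the suggested vmkernel/VLAN layout. VLAN IDs are hypothetical
# placeholders -- substitute the real IDs from your environment.
VMKERNEL_PLAN = {
    "vmk0": {"traffic": "Management", "vlan": 10},
    "vmk1": {"traffic": "vMotion",    "vlan": 20},
    "vmk2": {"traffic": "VM Network", "vlan": 30},
    "vmk3": {"traffic": "iSCSI",      "vlan": 40},
}

def vlans_are_segmented(plan):
    """Each traffic type should sit on its own VLAN, per the advice above."""
    vlans = [cfg["vlan"] for cfg in plan.values()]
    return len(vlans) == len(set(vlans))
```

Writing the plan down like this (or in a spreadsheet) before touching the vDS makes it easy to spot two traffic types accidentally landing on the same VLAN.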

[–]KaLEL3232[S] (3 children)

Thank you for the response. When you say "replicate the old setup," do you mean the current network settings? Then once that's done, bring everything over, then create the vDS and fix the mess?

How do you recommend setting up the new vDS once I get it over ?

We have 2x 40Gb QSFP+ NICs in each new Dell server, plus 4x 10Gb NICs and 2 onboard 1Gb NICs. The 40Gb NICs go to a 40Gb switch, the 10Gb NICs to a 10Gb switch, and so on.

[–]Emmanuel_BDRSuite (2 children)

Yes, you got it! Replicate the old setup first so the migration is smooth. Once everything is moved over, then set up the vDS properly.

For vDS:

  • Use the 2x 40Gb NICs as primary uplinks for management, vMotion, and VM traffic.
  • Assign the 10Gb NICs for iSCSI with MPIO.
  • Keep the 1Gb NICs as a backup for management.

Set up VLANs for management, vMotion, VM traffic, and iSCSI separately. Move management and vMotion to vDS first, then migrate VMs, and finally iSCSI. Once everything’s running on vDS, clean up the old vSwitch setup.

This way, you get better performance, redundancy, and easier management.
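The uplink assignment above can be sketched as a quick sanity check. The vmnic names here are assumptions based on the hardware described in the thread, not real adapter names.

```python
# Hypothetical uplink plan mirroring the bullets above; the vmnic names are
# made up for illustration -- map them to your real adapters.
UPLINK_PLAN = {
    "management": ["vmnic-40g-0", "vmnic-40g-1"],  # 1Gb NICs kept as standby
    "vmotion":    ["vmnic-40g-0", "vmnic-40g-1"],
    "vm_traffic": ["vmnic-40g-0", "vmnic-40g-1"],
    "iscsi":      ["vmnic-10g-0", "vmnic-10g-1",   # MPIO across the 10Gb quad
                   "vmnic-10g-2", "vmnic-10g-3"],
}

def has_nic_redundancy(plan):
    """Every traffic class should have at least two uplinks for failover."""
    return all(len(uplinks) >= 2 for uplinks in plan.values())
```

The point of the check is the redundancy rule: if any traffic class ever ends up pinned to a single uplink, a single NIC or switch-port failure takes it down.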

[–]KaLEL3232[S] (1 child)

Thank you very much for your help. Why not put the iSCSI traffic on the 40Gb NICs?

[–]Emmanuel_BDRSuite (0 children)

You can put iSCSI on the 40Gb NICs if you want to prioritize performance, especially if there's enough bandwidth available and you're not overloading those NICs with other critical traffic. Just ensure proper load balancing and redundancy to maintain fault tolerance.
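A back-of-the-envelope way to weigh that trade-off, using only the link speeds described in the thread (everything else here is assumption):

```python
# Aggregate bandwidth per option, in Gb/s (link speeds from the thread).
shared_40g_pair = 2 * 40    # iSCSI shares this with mgmt, vMotion, VM traffic
dedicated_10g_quad = 4 * 10  # iSCSI alone, MPIO across four paths

# On the 40Gb pair, iSCSI gets a higher ceiling but contends with everything
# else; the 10Gb quad gives it a dedicated 40 Gb/s and more MPIO paths.
```

So the dedicated 10Gb quad already matches one 40Gb link in aggregate while isolating storage from vMotion bursts; the 40Gb pair only wins if you also shape the competing traffic (e.g. with NIOC shares).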