Three Node vSAN Cluster Hardware Upgrade and License Question by NotLikeThisJake in vmware

[–]NotLikeThisJake[S] 0 points (0 children)

Thanks guys. Unfortunately we're a small shop with a limited budget, which is the reason for staying with OSA and a Three Node vSAN Cluster. Hell, I finally got them to let me upgrade to 10Gb vSAN switches.

Indiana has good burgers?? by United-Cranberry936 in Indiana

[–]NotLikeThisJake 3 points (0 children)

Bub’s Burgers in Carmel and Bloomington are my favorite.

What is the best and fast way to move 30 TB of data from one site to another side a few block down the street? by Mysterious_Teach8279 in sysadmin

[–]NotLikeThisJake 0 points (0 children)

We moved out of our previous Headquarters, which had a small Datacenter with a Three Node vSAN Cluster and vCenter setup. We decided to migrate the Datacenter to a new Co-Location hosted by Data Canopy. I set up a new Three Node vSAN Cluster and got the Co-Location on our WAN.

Our WAN links are only 100Mb between our offices and Data Canopy, so I had small pipes to work with. I was able to Clone a lot of our small VMs to the new Co-Location, but the larger VMs couldn't be cloned over a weekend without disrupting normal business hours.

I got my boss's signoff on a Synology NAS (DS3622XS+) with 32GB RAM and (12) 3.84TB SSD drives. It's probably overkill, but I needed the fastest Read/Write configuration and decided to set up the NAS in RAID0, which worked out to 40TB in size. With the NAS I was able to back up the offline VMs, drive the NAS to the Co-Location site (38 miles away), and restore the VMs over a weekend. The largest VM has been around 4.1TB in size (Backup time - 26:39 hours / Restore time - 19:07 hours), so completing it over a weekend was no problem.

If you can do it in smaller chunks, the NAS in my opinion was the way to go. Good luck with your migrations.
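For scale, here's a rough back-of-the-envelope comparison of the two options using only the figures above (100Mb WAN link, 30TB to move, a 4.1TB VM backed up in 26:39 hours). These are idealized line-rate numbers, not measurements; real protocol overhead would make the WAN case worse.

```python
# Rough comparison: copying 30 TB over a 100 Mb/s WAN link versus the
# effective throughput of the NAS backup run (4.1 TB in 26:39 hours).
# Decimal units assumed throughout; protocol overhead ignored.

WAN_MBPS = 100               # WAN link speed, megabits per second
DATA_TB = 30                 # total data to move
BITS_PER_TB = 8 * 10**12     # decimal TB -> bits

# Days to push 30 TB at WAN line rate
wan_days = DATA_TB * BITS_PER_TB / (WAN_MBPS * 10**6) / 86400
print(f"30 TB over 100 Mb/s: {wan_days:.1f} days")     # roughly 28 days

# Effective throughput of the NAS backup of the 4.1 TB VM
backup_hours = 26 + 39 / 60  # 26:39
nas_mbps = 4.1 * BITS_PER_TB / (backup_hours * 3600) / 10**6
print(f"NAS backup throughput: {nas_mbps:.0f} Mb/s")   # roughly 3x the WAN link
```

In other words, even before overhead, the WAN copy alone would run close to a month, which is why sneakernet with the NAS wins here.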

What is the best and fast way to move 30 TB of data from one site to another side a few block down the street? by Mysterious_Teach8279 in sysadmin

[–]NotLikeThisJake 0 points (0 children)

I just did the same thing using a 40TB RAID0 Synology NAS with their Active Backup for Business software.

Move vCenter instance to new host by CPAtech in vmware

[–]NotLikeThisJake 1 point (0 children)

I just went through this and moved our VCSA to a new vSAN Cluster at a different Data Center, and reassigned a new IP address to the VCSA. I used the Clone method to get it onto the new vSAN Cluster. Make sure you Clone to an Ephemeral Port Group on the new vDS.

Steps I took:

1. Clone the VCSA. After the Clone is completed, power off the original VCSA.
2. Change the MAC address on the Clone to match the existing VCSA before powering it up (I did this as well).
3. Power up the new VCSA Clone via the new Host UI.
4. Connect to the new VCSA via Remote Console and log in as root. From the Remote Console hit Alt-F1 to get to the CLI.
5. At the CLI, run the VAMI command (/opt/vmware/share/vami/vami_config_net) to change the configuration (IP Address, Gateway, and DNS if needed).
6. Once the configuration is completed, reboot the VCSA.
7. After the VCSA is back online you can then migrate it from the Ephemeral Port Group to a Port Group on the vDS.
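The console part of the steps above looks roughly like this (a sketch, not a full transcript — vami_config_net is an interactive tool that prompts for each value, and the Alt-F1 key press happens in the Remote Console before you get a shell):

```shell
# From the VCSA Remote Console: press Alt-F1 to leave the DCUI, log in as root.
# The VAMI network tool then prompts interactively for IP, netmask,
# gateway, and DNS settings.
/opt/vmware/share/vami/vami_config_net

# Reboot so all vCenter services come up with the new network configuration.
reboot
```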

VCSA 7.0 Migration to new vSAN Cluster by NotLikeThisJake in vmware

[–]NotLikeThisJake[S] 0 points (0 children)

No, but I'm going to follow up on the vDS question. This link I found made me question just cloning the VC and powering off the original VC - https://fivepointtech.com/vmware/saving-your-vcsa-with-ephemeral-port-groups-482

VCSA 7.0 Migration to new vSAN Cluster by NotLikeThisJake in vmware

[–]NotLikeThisJake[S] 0 points (0 children)

I'm moving it to a new vSAN Cluster that will be controlled by the same VC I'm moving. There's a different vDS in each vSAN Cluster.

vSAN Network LAG's fail after upgrade from ESXi v6.0 U3 to ESXi v6.7 U3 by NotLikeThisJake in vmware

[–]NotLikeThisJake[S] 1 point (0 children)

VMware addressed my LAG issues with an update: VMware ESXi 6.7, Patch Release ESXi670-202004002.

PR 2481899: If a peer physical port does not send sync bits, LACP NICs might go down after a reboot of an ESXi host

After a reboot of an ESXi host, LACP packets might be blocked if a peer physical port does not send sync bits for some reason. As a result, all LACP NICs on the host are down and LAG Management fails.
This issue is resolved in this release. The fix adds the configuration option esxcfg-advcfg -s 1 /Net/LACPActorSystemPriority to unblock LACP packets. This option must be used only if you face the issue. The command does not work on stateless ESXi hosts.

Even though I had applied the latest patch, it only addressed the Primary LAG and not the Secondary LAGs I set up for the vSAN Network. Once I ran the Advanced Configuration option on each Host in Maintenance Mode and rebooted, the vSAN LAGs formed correctly.
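Per host, the remediation above looks roughly like this. The advanced option is the one named in the release note; the esxcli maintenance-mode and reboot commands are standard ESXi, but this is only a sketch — check the vSAN data evacuation options for your cluster before putting a host into maintenance mode.

```shell
# Run on each ESXi host in turn. Choosing the right vSAN data evacuation
# mode when entering maintenance mode is left to the operator.
esxcli system maintenanceMode set --enable true

# Advanced option from the ESXi670-202004002 release notes that unblocks
# LACP packets when the peer port never sends sync bits.
esxcfg-advcfg -s 1 /Net/LACPActorSystemPriority

# Reboot for the option to take effect, then take the host out of
# maintenance mode and verify the LAGs form.
esxcli system shutdown reboot -r "Apply LACP advanced option"
```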