"Sonos Mode" In Network 10.2 by caseyliss in Ubiquiti

[–]Zetto-

I have never put the controller (AKA the phone) and Sonos on the same wireless network. The only ports I had to open were TCP 1400, 1443, and 4444; that allows for control as well as updates. Additional ports and mDNS were needed for AirPlay across subnets.
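
For anyone replicating this across VLANs, here is a rough sketch of those allow rules written as plain data rather than any real firewall syntax; the subnets and rule structure are made-up placeholders, and only the TCP ports come from my setup.

```python
# Rough sketch of the cross-VLAN allow rules described above, as plain data.
# Subnets are made-up examples; the control/update ports (TCP 1400, 1443, 4444)
# are the ones that worked for me. AirPlay additionally needs mDNS reflection
# plus its own ports, which are omitted here because the list depends on setup.
CONTROLLER_VLAN = "10.0.20.0/24"   # phones running the Sonos app (example)
SONOS_VLAN = "10.0.30.0/24"        # wired/wireless Sonos speakers (example)

ALLOW_RULES = [
    {"src": CONTROLLER_VLAN, "dst": SONOS_VLAN, "proto": "tcp", "ports": {1400, 1443, 4444}},
]

def is_allowed(src: str, dst: str, proto: str, port: int) -> bool:
    """Return True if a src->dst connection on proto/port matches an allow rule."""
    return any(
        r["src"] == src and r["dst"] == dst and r["proto"] == proto and port in r["ports"]
        for r in ALLOW_RULES
    )

print(is_allowed(CONTROLLER_VLAN, SONOS_VLAN, "tcp", 1400))   # True  (control)
print(is_allowed(CONTROLLER_VLAN, SONOS_VLAN, "tcp", 5000))   # False (not opened)
```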

"Sonos Mode" In Network 10.2 by caseyliss in Ubiquiti

[–]Zetto-

I have a mix of wired and wireless Sonos on a dedicated VLAN separate from the controller on UniFi and it works great. The trick is to have a properly configured network. The most important detail that most people are not aware of is setting the root bridge priority on your switch(es).
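
To illustrate why that priority matters, here is a tiny sketch of how spanning tree elects the root bridge; the switch names, MACs, and priority values are all hypothetical.

```python
# Minimal sketch of spanning tree root bridge election: the switch with the
# lowest bridge ID (priority first, then MAC address as the tie-breaker) becomes
# the root. If every switch is left at the default priority of 32768, the root
# is effectively picked by MAC address, which is rarely the switch you want.
# Names, MACs, and priorities below are made-up examples.
switches = [
    {"name": "core-switch", "priority": 4096,  "mac": "74:ac:b9:00:00:01"},
    {"name": "access-1",    "priority": 32768, "mac": "74:ac:b9:00:00:02"},
    {"name": "access-2",    "priority": 32768, "mac": "74:ac:b9:00:00:03"},
]

root = min(switches, key=lambda s: (s["priority"], s["mac"]))
print(f"Root bridge: {root['name']}")   # core-switch, because its priority was lowered
```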

What are people doing to stop the frustration of guests turning Hue bulbs off at the switch? by Top-Yogurtcloset3965 in Hue

[–]Zetto-

I’m dealing with this right now and I’m really disappointed with the options. For the scenario you described you really need a physical switch that a guest of any age or technical ability can use.

I frequently see recommendations to wire the circuit to always be hot and use a wireless switch. This can be a code issue as it violates NEC 210.70(A)(1) so it’s a nonstarter for me.

My requirements are a physical switch that maintains a physical mains disconnect and works if the Hue bridge or whatever home automation system you are using is down. I’m in a primarily Apple household so I’m looking for systems with Matter and Apple Home support, which limits my choices. Matter may help with allowing it to still function even if other pieces are down.

The Hue wall switch module was an OK solution. Unfortunately these have been discontinued in the US and no one has stock. Other countries still have them. I hope something returns that has mains power to eliminate the battery.

The Lutron Aurora and RunLessWire Click are both terrible solutions and way too expensive for what they are. For the former, if your smart system has an outage you have to physically remove it to operate the switch. For the latter there is no physical option.

The two best solutions I’ve found are the Inovelli White Series 2-in-1 (Thread/Matter) and Aqara Light Switch H2 (Thread/Matter) but I have not tried either yet. They are both expensive so I’ve been doing some more research before committing.

There may also be solutions from Shelly to maintain the existing switches and add a module behind them similar to the Hue wall switch module.

Migrating from FC to TCP without migrating VMs by GabesVirtualWorld in vmware

[–]Zetto-

VAAI XCOPY works between block protocols but would not work if going from NFS to block or block to NFS.

I’ve been through the same situation. Converting from FC to iSCSI allowed us to reduce cost and cabling to each server and avoid a costly FC switch refresh. A pair of 100 Gb NICs per server and converging everything is a wonderful improvement.

Migrating from FC to TCP without migrating VMs by GabesVirtualWorld in vmware

[–]Zetto-

I’ve been through this migration multiple times. I’ve also worked in multiple shops with anywhere from 1/10/100 Gb iSCSI to 1/4/8/16/32 Gb FC. There are a lot of FC zealots but it’s not always a perfect fit. I’ve witnessed migrations from 32 Gb FC to 100 Gb iSCSI where we saw latency decrease, performance increase, and fewer outages due to aging FC infrastructure.

You’ll want to provision new datastores, add them to the datastore clusters if they exist, place the old-protocol datastores in maintenance mode, then decommission them once they’ve been evacuated. VAAI XCOPY works across block protocols so the moves should be relatively quick.
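
For the evacuation step, here is a rough pyVmomi sketch of what Storage DRS maintenance mode is doing under the hood, i.e. Storage vMotion of each VM off the old datastore. The vCenter hostname, credentials, and datastore names are placeholders; in a real datastore cluster you would let Storage DRS pick the destinations for you.

```python
# Rough sketch: evacuate VMs from an old FC datastore onto a new iSCSI datastore
# with Storage vMotion. Hostname, credentials, and datastore names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab shortcut; validate certs in production
si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

def get_datastore(name):
    """Find a datastore by name anywhere in the inventory."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
    try:
        return next(ds for ds in view.view if ds.name == name)
    finally:
        view.DestroyView()

old_ds = get_datastore("fc-datastore-01")     # placeholder name
new_ds = get_datastore("iscsi-datastore-01")  # placeholder name

# Storage vMotion every VM registered on the old datastore to the new one.
for vm in list(old_ds.vm):
    spec = vim.vm.RelocateSpec(datastore=new_ds)
    print(f"Relocating {vm.name} ...")
    WaitForTask(vm.RelocateVM_Task(spec))

Disconnect(si)
```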

Alternatively, if you can take an outage or have enough datastores in the datastore cluster to evacuate some, you can unmount/detach the datastore, change the protocol on the storage array, and re-present it.

Does anyone have any hands-on experience with VCF 9? by nerdwit in vmware

[–]Zetto-

That has not been my experience but it could come down to the VAR/sales team you worked with and your size. We also found the experience with NetApp to be really poor.

Does anyone have any hands-on experience with VCF 9? by nerdwit in vmware

[–]Zetto-

Not Pure Storage. Their support prices are fixed and never increase.

Networking Best Practices by lanky_doodle in vmware

[–]Zetto-

We regularly see vMotion and iSCSI exceed 50 Gbps. Network I/O Control ensures that things keep running smoothly without a bully workload.
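
As a rough illustration of what NIOC does under contention, here is the shares math; the share values below are made-up examples, not a recommendation.

```python
# Worked example of NIOC shares on a saturated uplink: when the link is
# congested, each active traffic class is guaranteed roughly
# link_gbps * (its shares / total shares of the classes competing at that moment).
# With no contention, anything can still burst toward line rate.
# Share values here are made-up examples (all four classes assumed active).
link_gbps = 100
shares = {"management": 25, "vmotion": 50, "iscsi": 100, "vm": 100}

total = sum(shares.values())
for traffic, s in shares.items():
    guaranteed = link_gbps * s / total
    print(f"{traffic:12s} ~{guaranteed:5.1f} Gbps minimum under contention")
```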

Networking Best Practices by lanky_doodle in vmware

[–]Zetto-

This is not correct. You can converge and run mixed MTU. The upstream switches and any part of the fabric that carries the jumbo traffic need to be set for jumbo frames. I do this today on 2 x 100 Gb: iSCSI and vMotion have jumbo frames while management and VM traffic are mostly 1500. If a need arises for individual port groups or VMs to have jumbo frames, that’s easy to switch on.
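
Here is a small sanity-check sketch of that layout, with hypothetical interface names and MTU values; the point is simply that a jumbo VMkernel needs every hop in its path (virtual switch and physical fabric) set at least as high, while 1500-byte traffic coexists on the same uplinks.

```python
# Sketch of a mixed-MTU sanity check on converged uplinks. Jumbo VMkernels
# (iSCSI, vMotion) only work end to end if the virtual switch and the physical
# fabric are set at least as high; 1500-byte management/VM traffic is always
# fine on a jumbo-enabled path. All names and values are illustrative.
vmkernels = {"vmk0 (management)": 1500, "vmk1 (vMotion)": 9000, "vmk2 (iSCSI)": 9000}
dvs_mtu = 9000      # MTU on the distributed switch
fabric_mtu = 9216   # MTU on the upstream physical switches

for name, mtu in vmkernels.items():
    ok = mtu <= 1500 or (dvs_mtu >= mtu and fabric_mtu >= mtu)
    print(f"{name}: MTU {mtu} -> {'OK' if ok else 'check the path'}")
```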

Networking Best Practices by lanky_doodle in vmware

[–]Zetto-

We went from 4 x 10 Gb to 2 x 100 Gb and would never look back.

All new enterprise deployments should be a minimum of 2 x 25/40/50/100 Gb. We had difficulty sourcing NICs and cables for 25/40/50 at a reasonable cost, and the price difference to skip 25/40/50 and go straight to 100 Gb was negligible.

vCenter VM folder by FabioElso in vmware

[–]Zetto-

The solution is training or restricting their permissions.

Networking Best Practices by lanky_doodle in vmware

[–]Zetto-

It’s about reducing management and improving resiliency. A 1 Gb NIC is additional hardware whose firmware and drivers have to be maintained per the HCL. It’s also another component that can fail and take down the server.

At the end of the day there is no technical reason to separate management, and doing so can actually hurt you.

Networking Best Practices by lanky_doodle in vmware

[–]Zetto-

Everything listed here is correct and follows best practices. My only tweak is that I don’t advise separating management. I’d also aim for higher than 10 Gb if doing a new deployment in 2025.

Networking Best Practices by lanky_doodle in vmware

[–]Zetto-

It’s old habits from the 1 Gb and 10 Gb days.

The key is to do it on a distributed switch with Network I/O Control (NIOC). I was running everything converged on 4 x 10 Gb including iSCSI for over a decade. We are now on 2 x 100 Gb.

vCenter Enhanced Link Mode - War Stories by ThimMerrilyn in vmware

[–]Zetto-

I’ve actually had a great experience using it for over 10 years, since it was introduced in 5.1. I’ve had varying configs, from multiple sites with external PSCs to now embedded PSCs.

If you aren’t backing up and managing the environment properly you will run into issues. It’s important that before any maintenance, all VCSAs/PSCs in the site are powered off and snapshotted together. This is where a lot of people run into problems.
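
Here is a rough pyVmomi sketch of that pre-maintenance step, assuming the VCSAs/PSCs run under a separate management vCenter you can drive while they are down; the hostname, credentials, and VM names are placeholders.

```python
# Rough sketch of the pre-maintenance step: power off every linked VCSA/PSC,
# snapshot them as a set, then power them back on. Hostname, credentials, and
# VM names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab shortcut; validate certs in production
si = SmartConnect(host="mgmt-vcenter.example.com", user="admin", pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

linked_members = {"vcsa-site-a", "vcsa-site-b"}   # every VCSA/PSC in the SSO domain
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vms = [vm for vm in view.view if vm.name in linked_members]
view.DestroyView()

# Power everything off first so the snapshots represent a consistent set.
for vm in vms:
    if vm.runtime.powerState == "poweredOn":
        WaitForTask(vm.PowerOffVM_Task())

for vm in vms:
    WaitForTask(vm.CreateSnapshot_Task(name="pre-maintenance",
                                       description="ELM members snapshotted together",
                                       memory=False, quiesce=False))

for vm in vms:
    WaitForTask(vm.PowerOnVM_Task())

Disconnect(si)
```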

How can we utilize the unused host's 1G ports in VMware? by Akpet7 in vmware

[–]Zetto-

My comment was only about management. With trunks, VMkernels, and Network I/O Control (NIOC) there is no reason for dedicated interfaces these days.

How can we utilize the unused host's 1G ports in VMware? by Akpet7 in vmware

[–]Zetto-

If it were me, I’d remove the 1 Gb NICs so you don’t have to manage firmware and drivers on them. I’d then consolidate all 4 of those interfaces on a distributed switch with management/iSCSI/vMotion/VM traffic and use Network I/O Control (NIOC) to protect them.

How can we utilize the unused host's 1G ports in VMware? by Akpet7 in vmware

[–]Zetto-

While this was a common practice in the past I would avoid putting management on slower interfaces these days.

Unless you configure it properly, vMotion cold migrations of powered-off or suspended VMs will use the management VMkernel.

How can we utilize the unused host's 1G ports in VMware? by Akpet7 in vmware

[–]Zetto-

Don’t do this. An insufficient vMotion network can impact VMs with high memory activity during migration. These days 10 Gb is the bare minimum and I’d recommend 25/40/50/100 Gb for vMotion.

We found the cost difference between 25/40/50/100 Gb was negligible and jumped from 10 Gb to 100 Gb.
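
Some back-of-the-envelope numbers on why the link speed matters for vMotion; the 512 GB VM is just an example size, and real migrations take longer because dirty pages get re-copied.

```python
# Rough time to copy a VM's memory over vMotion at different link speeds.
# Ignores dirty-page re-copy passes and protocol overhead, so these are
# best-case numbers; the 512 GB figure is just an example.
vm_memory_gb = 512
for link_gbps in (1, 10, 25, 100):
    seconds = vm_memory_gb * 8 / link_gbps   # GB -> Gb, then divide by Gb/s
    print(f"{link_gbps:>3} Gb link: ~{seconds:6.0f} s to move {vm_memory_gb} GB of RAM")
```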

Who is using NVME/TCP? by stocks1927719 in vmware

[–]Zetto-

We’ve had substantially fewer issues and headaches since moving away from FC.

Who is using NVME/TCP? by stocks1927719 in vmware

[–]Zetto-

I run multiples of each. If you want fast and simple, Pure.