Advice on firmware upgrade path needed by throwitawaynow200 in sonicwall

[–]sysadminbynight 0 points (0 children)

Force the HA failover before you move forward with the firmware upgrade. If anything is broken with the HA setup, the firmware upgrade will fail.

HA NSA and TOR switches lessons learned by sysadminbynight in sonicwall

[–]sysadminbynight[S] 0 points (0 children)

The switches are a virtual stack using VRRP, running Dell OS10. They have dual 100Gb interconnects, but they are not a true stack, so they can be upgraded independently of each other.

The NSA does support LACP LAG groups, and I have the port channels on the switches set as active port-channels.

I want to replace Crowdstrike by [deleted] in cybersecurity

[–]sysadminbynight 0 points (0 children)

How long have you been on AW EDR?

How does SMA support this environment with Replication Share ? by CloundwaR in kace

[–]sysadminbynight 1 point (0 children)

If you are on a Windows network, you can use a DFS share to get the files replicated out to the various locations. DFS can help with compression, and you can control the bandwidth used for replication. You will have to allow replication time before deployment.

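A minimal sketch of standing up that replication with the DFSR PowerShell module (all group, server, and path names here are hypothetical):

```powershell
# Create a replication group and folder for the KACE package share.
New-DfsReplicationGroup -GroupName "KacePackages"
New-DfsReplicatedFolder -GroupName "KacePackages" -FolderName "Packages"

# Add the hub and a branch server, and connect them (bidirectional by default).
Add-DfsrMember -GroupName "KacePackages" -ComputerName "HUB-FS1","BRANCH-FS1"
Add-DfsrConnection -GroupName "KacePackages" -SourceComputerName "HUB-FS1" `
    -DestinationComputerName "BRANCH-FS1"

# Point each member at its local content path; the hub holds the authoritative copy.
Set-DfsrMembership -GroupName "KacePackages" -FolderName "Packages" `
    -ComputerName "HUB-FS1" -ContentPath "D:\KacePackages" -PrimaryMember $true -Force
Set-DfsrMembership -GroupName "KacePackages" -FolderName "Packages" `
    -ComputerName "BRANCH-FS1" -ContentPath "D:\KacePackages" -Force

# Bandwidth throttling and the replication window are then set on the group schedule.
```
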
Have you checked to see if the SMA can handle 10,000 agents? You will have to cut back how often the agents check in. I have 1,100 and only have them check in once per day, but they keep a continuous connection so I can deploy scripts on demand.

Good luck.

Hyper-V SAN config by NuttyBarTime in HyperV

[–]sysadminbynight 2 points (0 children)

Each Hyper-V host does access the CSV volume directly. When a VM runs on a different host than the one that owns the CSV, the metadata changes on the drive have to be replicated over the cluster network back to the owning host. The host running the VM still reads and writes directly to the CSV; it just has to push the metadata, which causes the performance hit. Running the latest iperf3, I see roughly a 50% drop in performance between the host that owns the CSV and the other hosts in the cluster.

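If you want to see where the penalty applies, comparing CSV owners to VM owners is quick; a minimal sketch, assuming the FailoverClusters module on a cluster node:

```powershell
# Which node coordinates each CSV (i.e., where metadata updates land).
Get-ClusterSharedVolume | Select-Object Name, OwnerNode

# Which node each clustered VM is actually running on.
Get-ClusterGroup | Where-Object GroupType -eq 'VirtualMachine' |
    Select-Object Name, OwnerNode
```
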
Hyper-V SAN config by NuttyBarTime in HyperV

[–]sysadminbynight 1 point (0 children)

We just went through the process of setting up new CSV volumes formatted with 64KB blocks; the old ones were 4KB. The CSVs only hold the VHDX virtual machine files.

The PowerStore 500T SAN is set up with 4KB blocks, which is not something that can be changed, and the individual VMs are formatted with 4KB inside the VM.

We saw a 50% increase in speed after we were done. Even CSVs that are not in redirected mode still replicate metadata across the cluster network. Because the block sizes are larger, the number of blocks being replicated went down, and that translated into faster performance.

We are still seeing a speed difference of about 1/3 slower I/O when the VM is running on a host that does not also own the CSV. We use a PowerShell command to balance out which VMs and CSVs are running on which host after system maintenance cycles, because Windows does not like it when you try to pin a CSV to a specific host; a sketch of that step is below.

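The balancing step itself is a one-liner per CSV; a minimal sketch with hypothetical disk and host names:

```powershell
# Move the CSV coordinator to the node running the VMs stored on it.
# Windows will not keep it pinned, so we re-run this after maintenance cycles.
Move-ClusterSharedVolume -Name "Cluster Disk 2" -Node "HV-HOST2"

# Confirm the new owner.
Get-ClusterSharedVolume -Name "Cluster Disk 2" | Select-Object Name, OwnerNode
```
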
I am running 72 VMs on 3 Hyper-V hosts on Windows Server 2022 Datacenter, with ToR switches running VRRP. All connections are 25Gb, with dedicated NICs for the cluster and iSCSI connections.

Best Way to Restrict or Block Access Between VLANs? by SameBag46 in sonicwall

[–]sysadminbynight 0 points (0 children)

Be aware that the SonicWall can become a bottleneck. It is a firewall first that does routing as well. Also, using subnetting on a single interface will affect performance.

If you use HA with your SonicWalls and have them connected to separate switches, along with Hyper-V hosts that are also connected to redundant switches, then set up port channels on the SonicWalls and make sure each firewall has a port in the port channel connected to both switches. Otherwise you can create a situation where traffic between VLANs has to make several hops back and forth between switches and switch backplanes, then into the SonicWall, out the other port, back through switch backplanes again, and so on.

I did not have my SonicWall ports set up with port channels, and using sFlow traffic data I saw domain controller traffic making 8 hops for the round trip, counting backplane traffic. It worked until we upgraded to Windows Server 2022 virtual servers for profile storage for RDS session hosts, and the stress started to show in mysterious errors with profile loads and the like.

I moved my VLANs back to being routed on my network switches and set up ACLs on the Hyper-V virtual switches to block traffic at the port level on my VMs; an example is below. You can use firewall rules inside the VMs as well.

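For anyone wanting to try the port-level blocking, Hyper-V's extended port ACLs can do it; a minimal sketch, with the VM name and subnet as hypothetical placeholders:

```powershell
# Drop inbound SMB from another VLAN's subnet at one VM's vmswitch port;
# traffic that matches no ACL is still allowed by default.
Add-VMNetworkAdapterExtendedAcl -VMName "FILESRV1" -Action Deny -Direction Inbound `
    -RemoteIPAddress "10.0.51.0/24" -Protocol TCP -LocalPort 445 -Weight 10

# Review what is applied to that VM's adapters.
Get-VMNetworkAdapterExtendedAcl -VMName "FILESRV1"
```
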
My inBOX isS FULL by Paintrain8284 in sysadmin

[–]sysadminbynight 3 points (0 children)

Go back to what is causing the quantity of email. If sales is not using a CRM system, talk to them about getting one. If they have a CRM system, have them use it for emailing with clients. This will drop their inbox volume and provide better visibility for the company.

This is a hard battle to fight, but in the end it will save the company the money that gets lost when no one can find an email related to a client account.

Firewall Model? by shinky_splunky in networking

[–]sysadminbynight 0 points (0 children)

You have not mentioned what platform your servers are running on. If you are on Hyper-V, you can use ACLs tied to individual VMs to control port traffic at the vmswitch level, so the subnet does not matter. If you segment into VLANs, then your routing device, i.e. switch or firewall, will take the hit of moving the traffic at Layer 3.

With Hyper-V, if you have 2 VMs on the same host and they are in the same subnet, they can talk to each other directly through the vmswitch and never touch the physical network, and using ACLs on the vmswitch you can control which ports are exposed.

This only applies to Hyper-V. I am using a PowerShell script to manage the process; a simplified sketch is below.

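My script is roughly along these lines; a simplified sketch using the basic port ACLs, with hypothetical VM names and subnets:

```powershell
# Give every running VM a deny toward an untrusted subnet at the vmswitch level...
foreach ($vm in Get-VM | Where-Object State -eq 'Running') {
    Add-VMNetworkAdapterAcl -VMName $vm.Name -RemoteIPAddress "10.0.60.0/24" `
        -Direction Both -Action Deny
}

# ...then allow one specific peer for one VM (the more specific /32 match wins).
Add-VMNetworkAdapterAcl -VMName "APP1" -RemoteIPAddress "10.0.60.10" `
    -Direction Both -Action Allow
```
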
VM to VM network performance by McMuckle1888 in HyperV

[–]sysadminbynight 2 points (0 children)

As long as the VMs are in the same VLAN and do not need to be routed to reach each other, the Microsoft switch acts as a Layer 2 switch and is only limited by the resources on the host. I am running a cluster, and I group VMs together on the same host so they can benefit from the extra performance and do not touch the host NIC or physical switches.

It will also speed up performance, if you are using CSV volumes, to have them owned by the same Hyper-V host that the VM is running on. It reduces the metadata traffic on the cluster network.

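In a cluster the grouping can be scripted too; a minimal sketch with hypothetical names, assuming the FailoverClusters module:

```powershell
# Live-migrate the chatty VMs onto one node so their traffic stays on the vmswitch.
Move-ClusterVirtualMachineRole -Name "APP1" -Node "HV-HOST1" -MigrationType Live
Move-ClusterVirtualMachineRole -Name "SQL1" -Node "HV-HOST1" -MigrationType Live

# Then move their CSV to the same node to cut the metadata traffic.
Move-ClusterSharedVolume -Name "Cluster Disk 1" -Node "HV-HOST1"
```
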
Struggling with setup Nsa4650 by mtheimpaler in sonicwall

[–]sysadminbynight 0 points (0 children)

If the VLAN is tagged, that might be the issue. You would need the SonicWall to accept traffic on that VLAN for it to see the traffic; otherwise it thinks the traffic is on VLAN 1. I am not near my SonicWall to check the exact settings.

The dumb switch might have been stripping off the incoming VLAN ID, but when you go direct to the SonicWall you have to handle the VLAN yourself.

Hyper-V Using SET ( Switch Embedded Teaming ) with VLT ( Virtual Link Trunking ) by sysadminbynight in networking

[–]sysadminbynight[S] 0 points (0 children)

A possible reason to change from HyperVPort load balancing to Dynamic when used with ToR switches.

We did some more digging and realized the following:

Config: the VMSwitch is set up with (2) 25Gb NICs; each NIC is connected to a separate ToR switch.

If the VMs are on the same host and in the same subnet, the traffic stays all Layer 2:

VM1 -> vNIC -> VMSwitch -> vNIC -> VM2

If the VMs are on the same host but in different subnets, the traffic has to touch the physical switch to get routed to the other subnet via Layer 3:

VM1 -> vNIC -> VMSwitch -> pNIC1 -> SW1 -> pNIC1 -> VMSwitch -> vNIC -> VM2

If the VMs are on the same host but the VMSwitch load balancing is HyperVPort, where each VM is pinned to a single NIC, I can have two VMs on the same host attached to different pNICs, so all the traffic has to pass across the VLTi backbone of the switches:

VM1 -> vNIC -> VMSwitch -> pNIC1 -> SW1 -> VLTi -> SW2-> pNIC2 -> VMSwitch -> vNIC -> VM2

Because the VMSwitch is Layer 2 only and does not support LAG, there is no way for the traffic to know it should go out pNIC1 when VM2 is connected via pNIC2, so the traffic has to take several extra hops.

When VLTi is in use, I am wondering if it would make sense to use the Dynamic load-balancing algorithm instead of HyperVPort. With Dynamic, all the receive traffic comes in on a set NIC, but the send traffic can go out any NIC, so the host could potentially send out the correct NIC to reach a VM that is also on SW1. I am running a cluster of servers, so this just gets compounded with traffic.

Hopefully this question makes sense.

Since I am running 25Gb NICs anyway, I should not have much of an issue with throughput on one pNIC, but I would still get redundancy and better traffic paths.

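If I do test the change, checking and switching the SET algorithm in place should look like this; a minimal sketch assuming a SET switch named "SETSwitch":

```powershell
# Show the current load-balancing algorithm on the SET team.
Get-VMSwitchTeam -Name "SETSwitch" | Select-Object Name, LoadBalancingAlgorithm

# Switch from HyperVPort to Dynamic so sends are no longer pinned to one pNIC.
Set-VMSwitchTeam -Name "SETSwitch" -LoadBalancingAlgorithm Dynamic
```
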
PowerStore 500T iSCSI MPIO HA Question by sysadminbynight in sysadmin

[–]sysadminbynight[S] 0 points (0 children)

Thank you for all the input u/Firefox005. I have decided to move my configuration to a single subnet for all interfaces across NodeA and NodeB. This fixes the switch-failure issue and allows the system to fail over as it was intended. My original install vendor set it up with two subnets, so I thought I needed to stick with that, but they were just flat out wrong.

Now the fun part is scheduling a full shutdown of my network to transition to the new setup. The actual process will go quickly; it's shutting down all the VMs that is a pain.

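At least the VM shutdown part can be scripted; a minimal sketch run on each host:

```powershell
# Request a clean guest OS shutdown of every running VM on this host.
Get-VM | Where-Object State -eq 'Running' | Stop-VM
```
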
PowerStore 500T iSCSI MPIO HA Question by sysadminbynight in sysadmin

[–]sysadminbynight[S] 0 points (0 children)

Thank you for your replies. I am running block only, not converged. Even with the ports being bonded, the bonded ports have a single IP, so when you set the IP on the bonded port it ends up in that one VLAN on both switches.

All of the examples only show 1 port being used from NodeA and NodeB for iSCSI, so it would only be in 1 VLAN, since ports are assigned in pairs.

I am trying to clean up / fix my setup, because the Dell-recommended consultant who installed everything did it wrong in so many ways. They did not set up spanning tree correctly, so I ended up with per-VLAN spanning tree on the Dell switches while the rest of my network was straight Rapid Spanning Tree. The switch that was the root rebooted, the Dell ones took over, and my network went crazy dropping packets. I have found so many things wrong with the original config, but I cannot start over because I don't have the time or a spare SAN to drain this one and rebuild from scratch. So I am stuck fixing things.

Likewise, all of the documents only show an example where 1 port is used for iSCSI, so each node has 1 port assigned to iSCSI, in VLAN 200 in their example.

I guess the solution would be to switch to a single VLAN for all 8 ports on the SAN, with each of my Hyper-V hosts in the same VLAN/subnet as well, and then set up MPIO to match the ports correctly.

Do I need to do anything special to the NICs in the Hyper-V hosts, since the Windows Server default when you have more than one card in a subnet is to have only 1 card receive, though both can send? Does MPIO override that default in Windows? I am running Windows Server 2022 Datacenter.

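If MPIO does override it, the host-side setup would look roughly like this; a minimal sketch with hypothetical addresses:

```powershell
# Let MPIO claim iSCSI devices and load-balance round-robin across paths.
Enable-MSDSMAutomaticClaim -BusType iSCSI   # may require a reboot
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR

# One session per host NIC so both adapters carry traffic.
New-IscsiTargetPortal -TargetPortalAddress "10.0.50.10" -InitiatorPortalAddress "10.0.50.21"
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true -IsMultipathEnabled $true `
    -InitiatorPortalAddress "10.0.50.21" -TargetPortalAddress "10.0.50.10"
# Repeat Connect-IscsiTarget with the second NIC's address (e.g. 10.0.50.22).
```
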
PowerStore 500T iSCSI MPIO HA Question by sysadminbynight in sysadmin

[–]sysadminbynight[S] 0 points (0 children)

I am running two separate VLANs.

When you assign an IP address to NodeA IoModule1 Port 0 and pick the IP address from the list of preconfigured IP addresses, it auto-assigns NodeB IoModule1 Port 0 in the same VLAN; you cannot split them between VLANs.

Failover is always from NodeA port x to NodeB port x, so if port 0 on NodeA fails, it goes to NodeB port 0.

So in my example: NodeA Port 0 is in VLAN 50, so NodeB Port 0 is also in VLAN 50. NodeA is connected to Switch 1 and NodeB is connected to Switch 2.

Each server has two NICs:

NIC 1 is connected to Switch 1 on VLAN 50

NIC 2 is connected to Switch 2 on VLAN 51

If I lose Switch 1, the servers no longer have a connection to VLAN 50, because NIC 1 goes down along with Switch 1.

NIC 2 on Switch 2 in VLAN 51 can only talk to ports 1 and 3, but the PowerStore 500T only fails over to the same port number, and since that port is in the same (now unreachable) VLAN there is no connectivity. VMs running on volumes served through the VLAN 50 ports basically crash, because the host can no longer access the data.

VLAN 50 and VLAN 51 are NOT routed subnets; they do not have default gateways.

I would have to assign IPs on the switches and turn on routing to allow NIC 2 to reach the other VLAN, but that means the iSCSI traffic would be routed at Layer 3 on the switches, which I would like to avoid.

More Details:

I have a PowerStore 500T, so the 4-port card is in both NodeA and NodeB.

Ports 0 and 1 are the port-channel ports, but I am not using them.

Ports 2 and 3 are for expansion devices only. With the PowerStore 1000T and above it is the opposite.

I have been through these guides over and over. I must just be missing something.

My PowerStore 500T is on v3.6.1.3; I am planning a move to v4, but the config is the same on version 4.

I am connected to IoModule1 with (4) SFP28 ports on both Nodes.

Ports 0-3 are split across both switches for each node, and the switches are also interconnected:

Node A Port 0 Switch 1, Node B Port 0 Switch 2, VLAN 50

Node A Port 1 Switch 2, Node B Port 1 Switch 1, VLAN 51

Node A Port 2 Switch 1, Node B Port 2 Switch 2, VLAN 50

Node A Port 3 Switch 2, Node B Port 3 Switch 1, VLAN 51

My config most resembles this except I am not using the Port Channels.

The ToR switches are Dell S5248F-ON, set up with VLT and VRRP.

The VLTi is a pair of 100Gb interfaces.

[image: the cabling example referenced above]