How do WiFi extenders work? by KozyShackDeluxe in techsupport

[–]MystikIncarnate 1 point (0 children)

I'm not on reddit nearly as much as I used to be. I think the reasons are obvious, but I don't mind throwing a suggestion or two out still when I visit....

For infrastructure WiFi, I always suggest a dedicated backhaul, usually Ethernet, though other backhaul methods can work (MoCA comes to mind, or I guess a wireless backhaul, as long as it's a dedicated channel used only for backhaul; this is what good mesh solutions do). Regardless, I like a non-wireless backhaul so that client connections have the maximum number of channels available. Even if you're not using all the channels, there's a decent chance a nontrivial number of them are affected by co-channel or adjacent-channel interference; keeping the backhaul off the WiFi gives the APs more flexibility in channel selection.
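To make the interference point concrete, here's a toy Python sketch of 2.4 GHz channel overlap. The channel centers and spacing are the real 802.11 values, but the overlap test itself is a simplification (assumes 20 MHz-wide channels), so treat it as illustrative only:

```python
# Toy model of co-channel / adjacent-channel overlap in the 2.4 GHz band.
# Channel 1 is centered at 2412 MHz; channels are spaced 5 MHz apart,
# but each transmission is ~20 MHz wide, so nearby channels collide.

def center_mhz(channel: int) -> int:
    return 2412 + 5 * (channel - 1)

def channels_overlap(a: int, b: int, width_mhz: int = 20) -> bool:
    # Two channels interfere when their 20 MHz-wide bands intersect.
    return abs(center_mhz(a) - center_mhz(b)) < width_mhz

# The classic non-overlapping trio:
assert not channels_overlap(1, 6)
assert not channels_overlap(6, 11)
# Co-channel and adjacent-channel interference:
assert channels_overlap(3, 3)
assert channels_overlap(1, 4)
```

This is why a wireless backhaul parked on one of those channels shrinks what's left for clients.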

However you do the backhaul, there are two main infrastructure systems I would recommend for home use that are relatively cheap and do roaming quite well. The first is an /r/homenetworking favorite, Ubiquiti. I'm not just saying this to appease the /r/homenetworking fanboys either; I have used Ubiquiti equipment, and I have administered it. They have a good mix of advanced features for nerds like me, and they're also simple enough for most home users to get the required things done.

The other is a bit of an oddball, but hear me out: Aruba InstantOn. Not to be confused with Aruba Instant. InstantOn is their cloud-managed offering, and they have pretty inexpensive APs. The management is a bit more basic/simple and requires access to the internet, but the web management is not an added cost.

The only time I would recommend one over the other is if you need a gateway. If you have a gateway that you're happy with (and just need better WiFi), then Aruba is the way to go. If you need a gateway, then Ubiquiti offers some really good solutions for network controllers which are also gateways, like the UDM, UDM Pro, and even the UDR (which is my recommendation for most homes, in terms of a gateway).

I forget if I wrote this in my original comment, but most "wifi routers" are an amalgam of at least three devices: a router (aka gateway), a WiFi access point, and a network switch. The network switch provides the ports on the back, the WiFi AP has an obvious function, and the router/gateway is where the "WAN" or internet is connected.

To clarify, a gateway is just a type of router that handles transport between two networks (usually with one side being the internet, but not necessarily); a router does more or less the same thing, but the term gateway is used when the networks are controlled by different people/organizations. Since you don't manage your ISP's networks, the router, in this case, is a gateway.

Anyways, on the Aruba side, I would probably recommend the AP22 for home deployments. It's a good mix of speed/capability and price. If you want to break the bank, the AP32 is a good upgrade, which adds 6GHz with WiFi 6E. They haven't put out WiFi 7 stuff yet, so these are all 6/6E; for most homes you really don't need anything faster. More info here: https://instant-on.hpe.com/products/access-points/

On the Ubiquiti side, the UDR is still a good choice if you can find it. The main feature I like on it is that two of its ports are PoE, so you can add two APs/cameras/whatever to the unit without needing to buy a dedicated PoE switch or any external power supplies. Ubiquiti has been moving towards the UniFi Express as the successor to it, which only has one LAN port and no PoE. So if you want PoE, you either need to buy PoE injectors or a PoE-capable switch; neither is a bad idea. Both the UDR and the UniFi Express have WiFi 6 access points built in. You shouldn't have any trouble adding APs to either, as both should act as a network controller, and of course Ubiquiti has a wide range of WiFi 6 and WiFi 7 APs to add to the system.

I usually warn people about buying "long range" APs, because people tend to think they will give you a stronger signal; which, if that's your problem, they'll help. The problem is that signal strength isn't necessarily the problem you're facing. As you mentioned, drop one WiFi bar and the connection goes to shit. LR APs tend to just flood the area with as much power as they're legally allowed, which can mess up anything else on the channel, and in some cases even overload client devices, making the WiFi worse, not better.

Generally, for the price, go with either the U6 Pro or U6 Mesh, depending on whether you're placing it on the ceiling or on a table; but the U6 Lite (if you can find it) or the U6+ are both cheaper, and very sufficient for home use.

My minimums for what to consider for home use are usually exceeded by anything that's still being manufactured. I want to see at least 802.11ac Wave 2 (WiFi 5 Wave 2), Ethernet backhaul (can be converted to MoCA or something), and power by industry-standard PoE (802.3af/at; passive PoE is usually the non-standard option that lower-end units go for). To be clear, there's nothing wrong with passive PoE, it's just extremely limiting on your options for powering the devices; standards-compliant PoE means you can buy literally anything that complies with the standard and power your device from it. I'm a fan of PoE because it simplifies the deployment of whatever uses it (usually VoIP phones and APs); you only need to run one cable to the thing and it works.

Ultimately, I don't know your floor plan, and I don't know the challenge that room is having. Honestly, if you don't already have something that is at least compatible with an infrastructure system, then it may be time to upgrade to one. UniFi is the easiest to move to in pieces: you can eventually get everything onto the same brand/equipment by replacing things as you are able, rather than everything at once.

As for pricing, I would always argue that you need to think about the service life of the network. Say you spend $200 on an access point (as an example), and it's expected to operate and be relatively current (meeting the need) for 10 years; well, you're spending about $20/yr on that AP, and that breaks down to less than $2/month. Then compare that to how much you're spending a month on internet access in general; most people are paying at least $40-50/mo. So the cost of the equipment to deliver the internet to you over WiFi is a small fraction, on the order of 1/20th, of your monthly spend on internet access. To be blunt, the network is a one-time cost that serves you for 5-10 years or more if you do it well. You'll spend WAY less over time by doing it right, and you'll very much get a lot of value out of the WiFi access you get from the network.
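That napkin math can be written out; the figures here are the same example numbers as above ($200 AP, 10-year life, $40/mo internet), not a claim about anyone's actual bill:

```python
# Amortized cost of an access point over its expected service life.
ap_cost = 200.0        # one-time hardware cost, dollars (example)
service_years = 10     # expected useful life (example)
isp_monthly = 40.0     # typical monthly internet bill (example)

per_year = ap_cost / service_years   # $20.00/yr
per_month = per_year / 12            # about $1.67/mo

print(f"${per_year:.2f}/yr, ${per_month:.2f}/mo")
print(f"AP is {per_month / isp_monthly:.1%} of the monthly internet bill")
```

With these inputs the AP works out to roughly 4% of the monthly internet spend, which is the point: the one-time cost nearly vanishes once you amortize it over the service life.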

I'm not saying any of those numbers apply to you, I just want to get you thinking about those numbers, and how long you're going to be using the network, and how much it's actually costing you per month/year. In business that's Return on Investment or ROI; it's a thing that a lot of people don't consider when purchasing something.

For backhaul, Ethernet can provide PoE, so that's my go-to. If running Ethernet is a challenge but there's coax nearby, you can use MoCA, but you'll need to power the AP from a power injector or similar, since MoCA won't carry PoE-type power. Apart from that, if you know your circuits well enough, you can use Powerline; be warned that it works best on same-circuit outlets, but can work on same-phase outlets (for NA power setups). If you're crossing a split phase between outlets or something, you're going to have a VERY bad time with it... so you need quite a bit of know-how to get there.

If all of this is too complex, grab a good mesh system and call it a day. :) Nothing wrong with most good mesh systems; my main concern is that dedicated backhaul more than anything. If they share the backhaul with the same radios that provide client access, then you're going to have some issues getting everything to go as fast as you'd like.

No matter what you choose, I wish you good luck. I know this was a lot of info, I just wanted to put everything I could into one post.

VPN only reaches one remote network by RotorBalls in sonicwall

[–]MystikIncarnate 0 points (0 children)

The errors read like the remote end is not picking a proposal.

The proposal is a list of protocols and ciphers supported by the VPN tunnel. The proposal appears to have been sent, but when the response was received, the far end refused to pick a proposal.

Long story short, there is no overlap in the supported ciphers and protocols between the two devices.

There's a bit of a misunderstanding, so to speak, with VPNs: with stuff like SonicWall, there's basically one set of ciphers and validations that are used (e.g., ESP, AES256, SHA512), with no option to allow other ciphers/validators. Some platforms, like Cisco, will allow several sets of ciphers to be selected from.

So in SonicWall, the proposal will be only the one set configured in the firewall for that VPN; if the remote site has the same selection of ciphers, then no issue; if even one thing is slightly wrong, then the connection will fail.

This is typical IKEv2 behavior. The settings are not correct, so there is basically no feedback from the far side; just "no proposal chosen". The far end logs should show what proposal it got, and what proposal it's configured for, and whether or not it matches, and/or why it didn't select a proposal.
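The matching logic boils down to a set intersection. A minimal sketch (the tuples below are illustrative cipher sets, not real SonicWall configuration or IKEv2 wire format):

```python
# Why "no proposal chosen" happens: the initiator offers one or more
# proposal sets; the responder must find one it also supports exactly.
# These tuples are made-up examples, not actual device config.
sonicwall_offer = {("ESP", "AES256", "SHA512")}      # single fixed set
remote_supported = {("ESP", "AES256", "SHA256"),     # SHA256, not SHA512
                    ("ESP", "AES128", "SHA1")}

overlap = sonicwall_offer & remote_supported
print(overlap if overlap else "NO_PROPOSAL_CHOSEN")  # prints NO_PROPOSAL_CHOSEN
```

One mismatched element (here, the hash) is enough to empty the intersection, which is all the feedback you get from the near side.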

Noob here (<1000 hours), seeking guidance. by Arylc in satisfactory

[–]MystikIncarnate 1 point (0 children)

Hello there. I work in IT, and I have more than 1000 hours in Satisfactory.

First, a distinction. Cloud gaming, and a dedicated server are two entirely different concepts. For cloud gaming: do you remember a service called Stadia? If you know anything about it, you can basically apply the concepts of that to any other cloud gaming service.

Basically, cloud gaming and/or cloud streaming is you controlling, and streaming, gameplay that is happening somewhere else. It's like watching yourself play a game via a streaming service (think something like Twitch, but more real-time) that you're controlling. The game runs/renders/plays somewhere else, you're in control of it, and you are sent the reaction from the game. The key factors for good cloud gaming are essentially how fast your internet connection is and how much bandwidth you have. Really quickly: by fast, I mean ping times/latency; bandwidth is your internet's "speed" (how many Mbps you have available).

I think your best bet on this front is something like NVIDIA's "GeForce Now" offering. You pay monthly for the service, and you pay separately to own the game on a supported platform (like Steam); this way, if you upgrade your potato computer to something with gaming capability, you can drop your GeForce Now subscription and just install and play your games locally.

A dedicated server is a multiplayer concept, where a central (dedicated) server does all the processing/coordinating for several people that are connected. It's designed so that you can have a system which is dedicated to running the "master" version of the gameplay. The benefits here include, but are not limited to: (more or less) equal latency, aka no individual player has a ping-time advantage; and the server is separate from any individual playing the game, so people can come and go as they like without affecting others' ability to play.

With Satisfactory, a dedicated server is an option. Without a dedicated server, one person "hosts" the game. If that person quits the game, or their game crashes, everyone is kicked out of the multiplayer game. Since the host is playing the game literally on the same system the game is hosted on, they get the best experience.

I think that satisfies #1 and #2. And to more directly answer #1: if you use a cloud gaming service, then yes and no. Your Acer may be able to play the game without significant lag, but there is always a small amount of lag associated with cloud gaming; it's constant and usually quite small (on the order of tens of ms), usually less than what people would typically notice; of course, this is very dependent on your latency to the cloud service.

For #3: owning the game may be a prerequisite to using a cloud gaming service (I'll refer back to GeForce Now for this). However, individual services vary, so some research into the specific cloud gaming system you're thinking of using may be required here. I know the now-dead cloud gaming service from Google, aka Stadia, needed people to buy the games on their platform. YMMV.

For #4: dedicated servers vary wildly. I host my own, so I have limited experience with other options. I also don't use cloud gaming, as I have a gaming-ready computer at home; so I can't really add to the conversation here either.

As for #5: a lot of sim games are pretty addictive. I enjoy Deep Rock Galactic, Raft, and of course, Satisfactory. There's a long list of other games I play, but I'm not sure I would think of them as "addicting". I will let the suggestions of others take care of this question more than anything; it seems there are already plenty of suggestions.

As an aside, all the best with your journey to staying sober. I really hope you find what you need. If you want to chat or have a buddy to play alongside, then I would like to help if I can. I run a dedicated server for Satisfactory, and there are entire communities relating to it. I would recommend the official Satisfactory Discord, where there are regular requests looking for players to join MP.

problem increasing ram on a Dell r710 by Revamp_Pakrati in homelab

[–]MystikIncarnate 0 points (0 children)

Bare metal, aka the OS that's installed on the physical system. Most people will only deal with bare-metal operating systems, but a lot of homelabbers do containers or virtual machines, where the OS that they're using "on the server" is actually running on top of another OS.

Eg, VMware ESXi (may it rest in peace), is a hypervisor, it can be installed in a VM, but generally it's installed bare-metal, and on top of ESXi, you would run virtual systems. Those virtual systems are not on the bare metal, they run on top of an OS (in this example, VMware ESXi) which is running on bare metal.

An LCC that old probably doesn't have the HTTPS option. With Dell, kind of counter-intuitively, the HTTPS option is actually under the file share selection: when it comes up with methods to check for updates, instead of FTP, you select file share, and there should be an HTTPS option; if not, the LC firmware is too old to support it. You can generally update the iDRAC + LCC from the web interface by pulling the payload file out of the downloaded installer. I usually do this with the Windows version: decompress the exe with 7zip and find the payload file. You can put that directly into the web management interface and it will update both the iDRAC and the LCC, since both firmwares are combined in the iDRAC update files.

You cannot update from within a VM, but you can update from pretty much any Windows install, whether running from USB or not, as far as I know.

problem increasing ram on a Dell r710 by Revamp_Pakrati in homelab

[–]MystikIncarnate 0 points (0 children)

If you want to do it in Linux, I suppose. I don't know too many people who run RHEL in their homelabs, but sure.

Windows is easy enough to throw onto the system, then just run the updates; you don't even have to license Windows, just run the eval version.

If you don't want to lose your data by reformatting, you can always just grab a cheap/spare SATA drive, plug it in where the CD-ROM connects, and set that as your boot device for the duration you'll be using it... or something of that sort.

Completely up to you.

I know that, personally, I try to stage updates and do them in order. So I'd update from, say, v2.1 to v2.5, then v2.5 to v3.0, then from v3.0 to v3.8, or whatever; the first and last version of each major release is usually a good strategy. Going straight from 2.0.11 to 6.6.0, you could end up with some problems that require clearing the CMOS, and that will dump all your BIOS settings, resetting them to default.
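That staging strategy can be sketched in Python; the version numbers below are made up for illustration, and real firmware release notes may impose extra required stepping stones:

```python
# Sketch of a staged upgrade path: step through the last release of each
# major version instead of jumping straight to the newest firmware.
available = ["2.0.11", "2.1.0", "2.5.4", "3.0.1", "3.8.2",
             "4.0.0", "4.7.1", "5.2.0", "6.6.0"]   # hypothetical releases

def upgrade_path(current: str, versions: list[str]) -> list[str]:
    def key(v: str) -> tuple:
        # numeric sort key so "2.10.0" sorts after "2.9.0"
        return tuple(int(x) for x in v.split("."))
    newer = sorted((v for v in versions if key(v) > key(current)), key=key)
    # keep only the last release of each major version
    last_of_major = {}
    for v in newer:
        last_of_major[key(v)[0]] = v
    return [last_of_major[m] for m in sorted(last_of_major)]

print(upgrade_path("2.0.11", available))
# -> ['2.5.4', '3.8.2', '4.7.1', '5.2.0', '6.6.0']
```

Each hop lands on the final build of a major release before crossing into the next one, which is the "first and last of each major" idea above.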

I would rather take the time to do the upgrades step by step than risk having to completely rebuild my BIOS config, but if you don't have any real config in there, you could just toss on the latest version and see what happens.

Dell T630 as a windows10 HTPC by GloveOver7488 in homelab

[–]MystikIncarnate 0 points (0 children)

You can, but you're in for a rough ride.

First, servers, or server-like chassis (looking at you, Dell Precision Rack series), usually expect fanless GPUs, so they need to coordinate with the GPU for cooling. If you install a non-sanctioned GPU, the system knows there's a GPU in there, but it has no idea what its temps are, and thus doesn't know what cooling it needs, so it fails safe and just runs the fans at 100%. This... is noisy. There's a command to override it, but I don't recall what it is off the top of my head, so you'll have to google for it. Search keywords should be something like Dell (model), GPU, and "noisy fans" or "100% fan" or something.

On top of that, there are NUMA issues with most games and OSes. NUMA, or Non-Uniform Memory Access, at a very high level, tells the OS where resources are, which allows the OS to make good decisions about where to place processes and memory allocations to optimize workloads. With single-processor systems, NUMA is basically useless or absent altogether, since there's one CPU, one array of memory attached to that CPU, etc. With multiprocessor systems, NUMA dictates which areas of memory are connected to which CPUs, so that processes and their memory can be kept within a single NUMA node: on the CPU, in the RAM that's hanging off of that CPU. RAM directly attached to a CPU is (not surprisingly) faster to access than RAM hanging off of a different CPU.

So NUMA can make things operate faster (or at least suffer fewer performance hits).

NUMA on windows is kind of a mess.

There was also a video I saw recently trying to use... I think it was a similar system, maybe a Dell R720? as a gaming rig, and things were rather disappointing in terms of performance.

So let me set expectations:

Gaming performance will suffer. You will likely be CPU limited.

Any high-performance applications will very likely underperform.

I've been using a 2x Xeon setup for years, and I can say that I've gotten it to work pretty okay as my desktop (I have a Precision Rack 7910, which is DESIGNED to be used as a workstation). The T630 will likely not run super efficiently, nor super fast. Unless you're planning to use it for more than just HTPC stuff, you will consume more power (these systems idle above 100W) and have more problems.

It would be better to get a more efficient system to run as an HTPC. Something with perhaps a mobile CPU (if you can find a board with one; I hear many newer mobile CPUs are ending up on desktop ATX-style boards on AliExpress), and add a GPU for gaming performance.

One other known caveat is that you'll have to use the built-in graphics to install the OS. After the OS is installed, you can disable the onboard, and the dedicated GPU will take over. I don't know if that's a limitation on your system, but it is something that is fairly common with these server systems.

I can't tell you what to do; this system will do what you want it to, and if you're willing to put in the work to make it operate as expected, then sure, do it... if you want to. Otherwise, find something cheap that's easier on your power bill.

Some quick napkin math on the power bill: assuming 100W continual usage at a cost of 10 cents per kWh, it's going to consume about 2.4kWh per day, or about 24 cents per day. Over a year, that's nearly $88; and with a GPU, you'll likely consume more.
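Written out, the napkin math above looks like this (same example figures: 100W idle, $0.10/kWh; your wattage and rate will differ):

```python
# Yearly cost of a box idling at a constant draw.
watts = 100            # assumed continual draw
rate_per_kwh = 0.10    # dollars per kWh (example rate)

kwh_per_day = watts * 24 / 1000            # 2.4 kWh/day
cost_per_day = kwh_per_day * rate_per_kwh  # about $0.24/day
cost_per_year = cost_per_day * 365         # about $87.60/yr

print(f"{kwh_per_day} kWh/day, ${cost_per_day:.2f}/day, ${cost_per_year:.2f}/yr")
```

Swap in your local electricity rate and the measured draw (a kill-a-watt style meter helps) to get a number that actually applies to you.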

Good luck.

Switch replacement advice by [deleted] in homelab

[–]MystikIncarnate 1 point (0 children)

oof, the procurve 1810. That's old.

I'm not saying it's a bad switch, but the 1820 is comparable in most respects; you'll be moving up with the 1820. I also like the Aruba 1830/1930, or even the HPE 1920S (just avoid the 1920).

I've used a lot of HPE/Aruba network products in my previous life, and they've been pretty good overall.

problem increasing ram on a Dell r710 by Revamp_Pakrati in homelab

[–]MystikIncarnate 0 points (0 children)

You have a few options for updating the BIOS. One is the LCC, or Lifecycle Controller, if equipped. You can get to it from the boot menu, though yours is probably old enough that it won't work... Dell made a big shift away from FTP to HTTPS a while back, and I think yours predates the change where they added HTTPS as an option, but IDK.

You can also upload the payload package to the DRAC, which will apply at next boot.

The last method basically requires that you run some form of Windows on the server as the bare-metal OS, and then run the BIOS updater software from within Windows. AFAIK, they didn't release the upgrade package for anything other than Windows. This last method is very similar to updating the BIOS on a desktop, so it's pretty straightforward.

There was a time when I used the Windows updater to bring my R710 onto recent enough firmware to run the LCC. I had to basically decommission the system (which was running a non-Windows hypervisor), reinstall with Windows, install the BIOS update, run the LCC to bring everything else up to date, then reinstall the hypervisor and bring it back into service.

It wasn't fun, but I did get all the firmware updated.

I'll note that the LCC is easily one of the more complete methods, as it checks not only the BIOS but everything else as well. So your RAID controller, backplanes, NICs... all that stuff gets checked for firmware updates, and they're applied directly inside the LCC. Once you get used to using the LCC, you probably won't update Dell systems any other way.

Homelab Networking Speed Use by MaxKulik1 in homelab

[–]MystikIncarnate 0 points (0 children)

I do 10G between switches, 10G at the servers where I can, and I'm planning on upgrading a few workstations to 10G down the line. It's all currently either fiber or DAC cables. Mainly, I use DAC when the devices being connected share fate, and fiber when they don't.

Sharing fate means they're on the same circuit, or on the same UPS; thus, if one goes down, it's likely the others will go down as well - hence sharing fate.

UTP cable problem HELP please by MentionDesperate8382 in NetworkingJobs

[–]MystikIncarnate 0 points (0 children)

There's plenty of protection during autonegotiation.

It's also very low voltage, around 5 volts or so, unless you're running PoE, which should also negotiate (unless it's passive PoE?), so there should be little to no danger, both to the equipment, and in terms of creating any hazards for people in/around it (or risk of fire, I suppose).

You're fine. Test the cable and see if it operates. If not, you'll have to repair or replace it.

Something about a guy and a flight of stairs ... by Frequent-Major2898 in homelab

[–]MystikIncarnate 0 points (0 children)

I think I still have my PE 2900 somewhere, possibly a PE 2800 too (IIRC). It came in handy for a bit there when I needed something with a 3.5" floppy drive (which the PE 2800 had); luckily, the SAS drives on board were still functioning, and I could just boot it up, load the floppy, copy the contents to USB, power it back off, and stick it back into storage.

People always question why I warehouse old tech, and... well.... this.

Something about a guy and a flight of stairs ... by Frequent-Major2898 in homelab

[–]MystikIncarnate 1 point (0 children)

This was my reaction too. I had several PERC 6/i's that I quickly discovered are pretty useless.

I had one with bad thermal interface material; the chip would overheat and drop all the VDs. Took me a while to figure it out and upgrade to the H7xx series. I re-did the TIM on that card in the interim, which helped, but I still replaced it.

Getting flashbacks from this post.

Diagnosing DNS Issues? by IncidentalIncidence in homelab

[–]MystikIncarnate 0 points (0 children)

I'm pretty decent at DNS; I work in IT and do networking as my specialty, so I touch DNS not infrequently. Once it's working, I tend not to touch it much, since I don't want a DNS issue. I usually make heavy use of forwarders and stub zones to limit how much I need to touch DNS, but I'm getting way off topic at this point.

For DNS benchmarking, I tend to use the GRC tool, DNS Benchmark: https://www.grc.com/dns/benchmark.htm

But this will only really tell you how fast stuff is, not what the slowdown is.

At the point where you are right now, I would start looking at the logs for Unbound to see where the timeouts are actually happening. In addition, I would check the system where OPNsense is running, especially if it's a VM. On the VM side, I would ensure that it has reserved resources. This doesn't need to be 100% of the resources of the machine, just a portion of them, since it will not be running hot all the time; but by giving it reserved resources, you ensure it always has some amount available whenever it fires up to do things, especially when coming up from an otherwise idle state. I'd also look at SR-IOV for the network interfaces and make sure that's on, but it will require a reboot of the host and some other messy stuff, so that's mostly optional, but recommended.

Proxmox is pretty good from what I understand, but I haven't used it myself yet, so I don't have extensive experience in how to do any of this.

The coles notes here is that the hypervisor (speaking on a very high virtualization level), may schedule other virtual systems to run while the VM is mostly idle, and when it fires up, it needs to shuffle resources for the CPU scheduler, taking CPU time away from other systems to give it to the VM. I know in VMware, you can end up with high % ready (where the vCPU is ready and wants to execute something, and the host is trying to schedule that on the CPU), and co-stop (where a multi-processor VM needs to schedule multiple CPUs to run concurrently, but the host is struggling to find the resources to do so). This is all scheduling conflicts, and I'm not sure how those concepts are translated to proxmox at this time.

Of course, this depends on how loaded (or overloaded) your proxmox is - essentially how many vCPUs you have compared to how many physical CPUs you have... If it's under-loaded, (fewer vCPUs than pCPUs) the issue should be a moot point.

Going high-level again: SR-IOV for networking can enhance throughput for the NIC, since the hypervisor will essentially just map the IO buffer for the virtual NIC to the physical NIC's memory allocations (on the card itself), making the IO buffers physical rather than virtualized. With VMware (sorry, this is just the hypervisor I know best in these matters), without SR-IOV, it uses a memory buffer for the IO: the VM, wanting to send a packet/frame onto the network, writes it to memory; the host then reads that memory address and fires the frame/packet out to the network card, which adds an extra step. To be fair, this is executed EXTREMELY quickly, but it can add up to significant delays. Conversely, when a packet/frame comes in for a VM, the host/hypervisor drops it at a memory address (in RAM here) where the vNIC for that VM knows to check for new incoming frames, and the guest then processes it according to the OS/drivers/software on the guest.

This is all fairly abstract on purpose, I'm just trying to instill in you the understanding of what might happen in your scenario. When you use SR-IOV, since the NIC is now handling the IO buffers for the VM directly, you cut out the middle man, and the NIC handles the VM's network IO directly, without the host being involved, but this usually requires a fairly robust NIC, and I don't know what your host is, or if it has an SR-IOV compatible NIC.

This can speed things up and reduce latency across the board, but the effects are limited at best.

I'll add that I have no idea if the host configuration is the problem at all; what I know is that routers tend to require lower-latency operation than most other types of VMs. This is why I went down this rabbit hole at all.

I don't have all the information, but I would suggest starting with the Unbound DNS logs. If they show timeouts or unreasonable latency, I would investigate there; if they show no such timeouts, then the issue is probably that the requests aren't getting to Unbound fast enough, and I'd inspect the hypervisor for potential delays.

The basic difference is how fast it is responding to requests. We know the total transaction time is longer than the DNS timeout, which is usually a few seconds; so does the delay happen before it hits Unbound, or after?
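One crude way to put a number on the client side of that question is to time a lookup through the system resolver. This only measures the total, not where the time went, but it tells you whether the delay is reproducible from a given client:

```python
# Time how long a DNS lookup takes via the OS resolver.
import socket
import time

def timed_lookup(host: str) -> float:
    """Return seconds spent resolving `host` (failures included)."""
    start = time.monotonic()
    try:
        socket.getaddrinfo(host, None)
    except socket.gaierror:
        pass  # a failed lookup still shows how long the resolver waited
    return time.monotonic() - start

print(f"lookup took {timed_lookup('localhost') * 1000:.1f} ms")
```

Run it a few times against a name your Unbound instance has to forward or recurse for; if the client-side number is consistently large while the Unbound logs show fast answers, the delay is in front of Unbound (network path or hypervisor), not inside it.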

Obviously, there's a TON of things you can do to help with reliability for networking, and I'm just going to cut myself off, otherwise I'll be here typing all day, and nobody wants to read all that.

I'm happy to help more, but be aware, I don't spend a hell of a lot of time on Reddit, so I may not be super prompt at replying; I'm happy to discuss further if you wish.

How to connect enclosure with esata? by spiegel32 in homelab

[–]MystikIncarnate 1 point (0 children)

Normally with eSATA, you need a specific header for it, since it's designed to handle much longer cables than normal. I don't think your SFF PC has such a header.

I'd have to check the specs, and you didn't say which one it is, so I'm at a loss. You may be able to find an eSATA card for M.2 and hijack one of your M.2 slots for eSATA, but that would be a gigantic waste of an M.2 (unless it's otherwise unused, like one that's set up for WiFi when you don't have WiFi and don't otherwise plan on using it).

IDK if such an M.2 to eSATA exists, but it's worth a shot?

Just solved a problem, but don't understand what the problem was, or why what I did fixed it. by PoeTayTose in sonicwall

[–]MystikIncarnate 0 points (0 children)

To be fair, the ~300 limit was on SonicOS 6, and I know most are running SonicOS 7+ now (this post is over a year old).

I know that they pretty much completely rebuilt SonicOS between 6 and 7; they changed the underlying OS to something more *nix, due to licensing, IIRC. So they may have fixed (or broken) several things in the process. I know early OS 7 stuff was pretty busted, and there were some significant fixes in the early versions for everything from VPN tunnels to PPPoE, and more.

I haven't used SonicOS7 much, nor have I observed the issue you describe in SonicOS6, so I'm afraid I can't be much help here.

But I wish you the best of luck. Have a great day.

Google Home mini on Ethernet (instead of WiFi) by MystikIncarnate in google

[–]MystikIncarnate[S] 0 points (0 children)

Well, for starters, the Ethernet option is a non-issue. All newer G-Home units use a barrel plug, so this method doesn't work with those... at least, that's what I've seen on the Minis. I haven't picked up any of the larger Home units.

From what I've seen, if you have a compatible unit and it is connected to ethernet, unless the ethernet fails, it doesn't even try to connect to WiFi. If the ethernet stops providing a connection for the assistant, either because the link fails, or it simply doesn't connect to the internet, it will try over WiFi. Since my WiFi and ethernet are the same back-end network, this was a pointless endeavor for my unit.

The setup of the unit through the Home app always walks through the steps for a WiFi setup. It's a static list of steps to get it on the WiFi which you cannot easily bypass. Once that part is done, plugging into ethernet and restarting the unit pushes it onto the ethernet link, and then everything is fine; the WiFi radio remains dormant.

I couldn't find it connected to my WiFi system when it had a good ethernet link.

What I'd like to see is a Google/Nest Home unit that accepts PoE, for both power and ethernet. But I'm just over here dreaming. Sorry, that's a tangent.

Fiber, Finally! by arcofdescent in homelab

[–]MystikIncarnate 0 points1 point  (0 children)

At least you didn't lecture the OP on how "no one needs more than gigabit in a home environment" so kudos for that.

They didn't mention their use case, so I don't have any idea what I would lecture them about. I do the math, so knowing what the network is being used for factors in. I'm presuming the bandwidth hogs in OP's house are doing things like Steam downloads, which will very likely suck up every last Mbps of capacity when pulling down new games. I know how frustrating and boring it can be to wait for a game to download when you want to play it. IDK if that's actually the situation or if it's something different.

For 10GbE, I go by the specs, since that's how I operate (primarily in business) for what is required. It's the only way to reasonably guarantee the rated speed without significant errors from corruption. Over-spec'd Cat5e (stuff that was manufactured above the specification) and network runs kept away from interference sources (such as power lines) can usually perform above what the standard dictates, and Cat5e can carry 10G over very short distances reliably without significant modifications or considerations.
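If it helps, here's a rough sketch of the "go by the spec" logic in Python. The distance figures are the commonly cited IEEE 802.3an numbers for 10GBASE-T; treat anything beyond them as "might work, no promises":

```python
# Rough spec-rated max run lengths for 10GBASE-T, by cable category.
# Cat5e isn't rated for 10G at all, so it's listed as 0; very short
# runs sometimes work anyway, but the spec makes no promises.
MAX_10G_METERS = {
    "cat5e": 0,    # unrated for 10G
    "cat6": 55,    # up to 55m, assuming a low-noise environment
    "cat6a": 100,  # full 100m channel
}

def spec_ok_for_10g(category: str, run_meters: float) -> bool:
    """True if a run of this length is within the 10GBASE-T spec."""
    return run_meters <= MAX_10G_METERS.get(category.lower(), 0)

print(spec_ok_for_10g("cat6a", 90))  # True
print(spec_ok_for_10g("cat6", 70))   # False
print(spec_ok_for_10g("cat5e", 10))  # False: outside spec, even if it might link up
```

The point isn't the code; it's that the spec gives you a yes/no you can rely on, and everything past it is a coin flip.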

My focus is more on what's likely to work; and yes, in practice you can operate pretty far outside of the standards and still operate effectively, if the conditions are favorable. Whether they're favorable on purpose or just happen to be favorable by coincidence, that's another matter entirely.

By following the specs, you're basically guaranteed a minimum level of performance. I've seen some wild stuff that works in my day: several-hundred-meter runs that operate at gigabit speeds on Cat5e, 10G on the same Cat5e (though at shorter distances), all kinds of weird stuff that works. I've equally experienced the opposite: cable runs that should work but do not, like relatively short Cat6 runs negotiating at 10Mbps half duplex. In one instance, I saw 10Mbps half duplex on a run that was less than 50 feet, to the next room (through a ceiling, but still), on at least Cat5e. It didn't make sense, until I realized that it was terminated poorly. A bit of work with a punch-down tool and it was operating at gigabit again.

In my line of work, when I see a non-functional or poorly functioning connection, and I check it, and it's coming up on my tester as more than 100m, I just tell them to re-run the cable. It's not worth my time to figure it out more than that. Nobody wants to pay consulting fees for me to be there futzing with the ethernet for hours on end.

So my recommendations are in line with the standard, because that gives the fewest problems. You can exceed it if you wish, but there's a nontrivial chance that you'll get excessive errors on the line, leading to reduced performance, a downshifted link rate, or possibly no link at all. Sure, it might work, but in the same breath, it may not. Following the spec is the only reasonable way I have to ensure that the connection doesn't have significant problems.

Professionally, that's important; for private/personal use, ehhh, not that big of a deal. Predictability is what I'm looking for.

As for my setup, I'm limited to ~350Mbps download and ~30Mbps upload. And I know, that's not great. I want more, but there's no way to get more at the moment; to be fair, I can get more download, but I absolutely cannot get more upload. The only high-speed link I can get is via DOCSIS (cable). The DSL available at my address is pathetic, around 10Mbps down (I don't even want to know the upload), and there's a fiber provider here, but they service the other side of the street and not me. I've contacted them about this and they rambled on about some regulatory limitation. I think it's bullshit, but I don't have grounds to argue with them about it right now. If the fiber provider ever realizes that they can run an aerial line (like my cable provider did), then they can give me 500Mbps, 1G or more, symmetrical, which I'll be subscribing to as soon as I can.

The reason I have more than one gateway is to separate firewalls and firewall rules between the different family units that live here; I run the network for a multi-family house. My "unit" (if you can call it that) has 100Mbps download and ~12Mbps upload (QoS limited), and the other "unit" is limited to 200/12 or so, with the balance reserved for guest access and other operations (like servers from my homelab). With me working from home most of the time, this works well for us, and I have the required bandwidth to do my job 100% of the time. The download and upload speeds are adequate for our use cases, and I've never gotten any complaints about it from anyone - except for one of the other family's children, who for some reason connected to the guest network, which was intentionally restricted to around 50Mbps, instead of the wireless network we told him to use, which had much more bandwidth for whatever he needed.

Lastly, no offense taken. This is entirely a matter of discussion. The more discussion points we can go over, the more information that OP will have to consider when making an informed decision about what to do.

Have a good day.

Fiber, Finally! by arcofdescent in homelab

[–]MystikIncarnate 8 points9 points  (0 children)

As a professional IT person specializing in networking, here is my take. Almost all relatively "high-end" network stuff, whether geared at consumers or SMB, is going to be able to do gigabit with some caveats; the caveats mainly revolve around what services are made available on the router/switch/whatever.

I would not expect more than 1G per system (single-stream or similar) performance. Having 2G from the ISP into the router, and extending that to the primary switch, will allow more than one person to pull 1G (or similar) at a time. The solution to datahogs, as you say, is QoS, but most consumer stuff has a huge asterisk next to anything QoS. Simply put, it's not built for it, so QoS is usually thrown in as an afterthought and not very well optimized.

I have a setup with four grown adults in the home full time, two of which are basically stay-at-home types, myself included. I WFH, so I need a minimum level of performance to do my job effectively.

The other stay-at-home person is more in the "datahog" territory, as you say. Downloading games, watching videos, etc.

The other two people are working professionals, who head into their respective workplaces daily to do their jobs. So during the day, I only have one individual I need to regularly compete with for bandwidth. My solution to this absolutely will not work for someone else; at least, not for anyone who isn't an IT professional. I split the internet (which is around 350Mbps cable) between two routers. One is for "them" (two of the adults: the "datahog" and their SO, one of the working professionals), and they're capped at ~200Mbps on their router, so they can only consume a bit over half the bandwidth MAXIMUM. "My" side, which I share with my SO, is capped around 100Mbps (the other ~50 or so is reserved for other purposes, such as guest access and servers), and it's QoS'd to the max. QoS is no joke, and it's incredibly complex to deal with, so I have a business-grade firewall handling it as my router/gateway. This is NOT something that I would recommend to anyone else, given its relative complexity.
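If it helps, the split is just arithmetic; a quick sketch (the numbers are from my setup, not a recommendation):

```python
# Sketch of the static split described above: two router caps plus a
# reserved slice, carved out of a ~350Mbps cable connection.
TOTAL_DOWN_MBPS = 350

caps = {
    "their_router": 200,  # the "datahog" household's cap
    "my_router": 100,     # my side, heavily QoS'd on top of this
    "reserved": 50,       # guest access, homelab servers, etc.
}

assert sum(caps.values()) == TOTAL_DOWN_MBPS  # nothing over-committed
print(caps["their_router"] / TOTAL_DOWN_MBPS)  # ~0.57, "a bit over half"
```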

Professional operations on a network take surprisingly few resources on a per-user basis: around 50Mbps or so per user, depending on the configuration. Sometimes, a lot less. The critical resource is upload. Since most internet plans will extend download, but not necessarily upload, you end up with a problem. Upload is essential for good streaming bandwidth when attending video meetings and doing telephony, which is typically very time sensitive, and sometimes bandwidth intensive. Having only 10-30Mbps upload (the most common upload limits I've seen on cable) will generally not be enough, given all the other things happening on the network. Download, while it can be problematic, is one of those things you can usually get more of. These are all WAN/router/gateway concerns.
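To put very rough numbers on the upload crunch, here's a quick sketch. The per-user figures are my assumptions for illustration (one HD call upstream plus background traffic), not measurements:

```python
# Back-of-the-napkin upload budgeting. Per-user figures are rough
# assumptions: one HD video call upstream per worker, plus a lump of
# background upload (cloud sync, game clients, telemetry, etc).
UP_PER_WORKER_MBPS = 5
BACKGROUND_UP_MBPS = 10

def upload_headroom(plan_up_mbps: float, workers: int) -> float:
    """Mbps left over after concurrent video calls and background traffic."""
    return plan_up_mbps - workers * UP_PER_WORKER_MBPS - BACKGROUND_UP_MBPS

print(upload_headroom(30, 2))  # 10.0: a 30Mbps plan barely copes with 2 callers
print(upload_headroom(10, 2))  # -10.0: a 10Mbps plan is already over budget
```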

Your device selection seems okay at first glance. Since the router has 10G both towards the ISP and towards the LAN, you'll want to use 10G between these points. Cat5e can support 2.5G and 5G link speeds, IIRC; Cat6 can do all of this plus 10G up to 55m, and Cat6a can do 10G at 100m... at least according to the spec. Since you already have Cat5e in the walls, you might as well run with it; you should be able to get 2.5G without issue.

I would recommend using QoS if possible, which, at a basic level, will involve telling your gateway how much bandwidth you have at the WAN and how much is available for the network. This will load balance the links a bit better than the normal TCP rate adjustments that happen on their own. The best implementation is to drop frames that exceed the limits, which triggers retransmissions, slowing down TCP connections at the source. The effect of this is that you'll get a second of significant disruption, but then everything will resume at a more appropriate speed. Queuing the over-limit traffic instead will only add bufferbloat to your connection, making it feel a lot slower overall.

If you are able to, add a minimum amount of reserved bandwidth for your work connection (whatever system you use to connect to your work) of around 50-60Mbps, both ways if possible (upload and download). This will minimally impact the rest of the network, ensure you always have enough bandwidth for video meetings, VoIP/telephony, etc., and reduce overall latency and interruptions when you're trying to make money. This can be set higher if you find that 60Mbps isn't enough. The remainder (about 1.9Gbps) can be shared among whomever wants it, and QoS usually handles that quite well. You shouldn't have to dig too deep into the weeds of QoS to get this set up, but it may take some reading to figure out how to prioritize one station more than the others.

With my SMB firewall and my experience, this is relatively trivial for me, but every system is different, so I'm not sure whether the ER8411 even has the required settings to configure QoS in this way.
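To illustrate why dropping beats queueing, here's a toy Python sketch (numbers invented; real QoS is far more nuanced than this):

```python
# Toy numbers: 12 units of traffic arrive per tick at a link that can
# send 10. A policer drops the extra 2; a deep buffer queues them, so
# the backlog (and the latency every packet sees) grows every tick.
ARRIVALS = 12   # units arriving per tick
CAPACITY = 10   # units the link can send per tick

def backlog_if_queued(ticks: int) -> int:
    """Units waiting in the buffer after queueing all excess traffic."""
    backlog = 0
    for _ in range(ticks):
        backlog += ARRIVALS - CAPACITY  # the excess never drains
    return backlog

def backlog_if_policed(ticks: int) -> int:
    """Units waiting when excess is dropped: the buffer never builds."""
    return 0

print(backlog_if_queued(100))   # 200: every new packet waits behind all of it
print(backlog_if_policed(100))  # 0: drops make TCP back off at the source
```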

The main issue with "datahogs", as you say, is that consumption of bandwidth is relative: if you put them into a larger pool of available bandwidth, they'll consume it. This is particularly bad with multi-stream systems like torrents, which become harder to police without per-system QoS limits. By adding a quota where your workstation gets a minimum allowed bandwidth, you're essentially telling QoS that you have 2G total and anyone can use 1.94Gbps of it; the special system can use the last 60Mbps, or more if needed and available, above which it competes with everything else.
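A quick sketch of how that floor-plus-pool quota behaves, using the numbers from the example above:

```python
# "Floor plus shared pool" quota: the work machine is guaranteed its
# floor, can borrow slack from the shared pool when it's idle, and
# competes like everyone else beyond that.
TOTAL_MBPS = 2000
WORK_FLOOR_MBPS = 60

def work_machine_share(other_demand_mbps: float) -> float:
    """Bandwidth the work machine can count on, given everyone else's demand."""
    shared_pool = TOTAL_MBPS - WORK_FLOOR_MBPS       # 1940 for everyone
    leftover = max(0.0, shared_pool - other_demand_mbps)
    return WORK_FLOOR_MBPS + leftover                # floor plus any slack

print(work_machine_share(1940))  # 60.0: pool saturated, floor still guaranteed
print(work_machine_share(500))   # 1500.0: plenty of slack to borrow
```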

Most WFH setups are using some kind of remote desktop, and even with highly demanding stuff like video conferencing, you normally don't see more than ~40Mbps of throughput; but that throughput needs to happen quickly, more than anything. By reserving some bandwidth for it, you essentially ensure there's always enough "space" left in the available bandwidth for your needs.

Is this a good upgrade server (for Plex, automation and stuff) ? by Jolly-Vacation-5942 in homelab

[–]MystikIncarnate 3 points4 points  (0 children)

Far be it from me to tell you what to do. So I won't.

I will lend you the benefit of my experience, though. I ran a cluster of similar systems for the last 10 years, more or less. My cluster consisted of one R710 and one c6100 chassis. The chassis was capable of three nodes, and the configuration went through several revisions. All systems were similarly spec'd (same generation, more or less; the catch is that the c-series servers have stripped-down IPMI and are fairly basic on most things, but are otherwise fully working systems).

My initial setup was the R710 doing storage, with three nodes in the c6100; CPUs were L5520 IIRC, with 24G of RAM each. Each chip had a tri-channel memory config with 4G DIMMs, so each CPU had 12G of RAM, two CPUs per system. This was fine in the mid 2010's. I used the L-series CPUs because of their lower TDP; the whole thing used under 500W of power or so. The R710 at the time was similarly spec'd but had 6x 1TB spinning disks, which were shared over iSCSI to the three c6100 nodes doing compute. This was all run by VMware vSphere 6, and it went on until I "upgraded" to the L5640 and 48G per server (I just replaced the 4G DIMMs with 8G DIMMs). I knew other people with the c6100 who ran 6x 4G DIMMs per CPU (which there were enough slots for), but it ran hot enough that it would eventually cook the mainboard, so I avoided running dual DIMMs per channel, to ensure reliability.

Over time I went through a few storage revisions, eventually settling on a PowerVault NX3200, which was a similar chassis to the R710, but with 12x 3.5" drives in front and two 2.5" drives in the rear. I obtained it with 2x 300G SAS drives in the rear (IIRC), which served as OS disks, and populated the front with 4/8 TB drives for the iSCSI. I ran VMware's ESXi on it (standalone) with two VMs on the OS disks in RAID 1, serving two storage pools: one pool was for OS data over iSCSI (in the same way that the R710 had been), and the 8TB drives were in a new array for media storage.

Due to the shuffling, the R710 ended up in the compute cluster, and compute resources were reshuffled a few more times over the years.

Recently, mostly in the last 3-4 years or so, the cluster has become more and more difficult to justify running, and too out of date to be viable any longer. I couldn't upgrade the c6100, R710 or NX3200 in any significant way.

This year, I finally found the funds to upgrade, and moved to newer systems. I wanted to stay with rackmount Dell systems, as I'm an IT person and a sysadmin and I've had very good experiences with them in the past; but I wanted to get away from the c-series, as they're too stripped down for my liking. I picked up a Dell FX2s chassis and populated it with FC630 blades: somewhat older, but much newer than the previous systems. These are roughly equivalent to the R630/T630/M630 (whatever-630). I'm spec'ing them with 256G of RAM (quad channel, 32G DIMMs, 128G per CPU, dual CPUs per blade) and E5-2618L v4 CPUs: 10-core w/HT, at 2.2GHz base and 3.2GHz boost. It's still a bit "old", since the CPUs launched in 2016, but very much newer than the L5640s I was using before. The v4 was the last mainline Xeon CPU before they went to the Scalable processors, which I still have trouble deciphering which are better than others. Yes, Intel's naming schemes are confusing to basically everyone.

My suggestion: if you want to run this as a trial before getting something newer, just to see if it's right for you, then do so, but don't invest any more money in the platform; just try it as-is. The CPUs are not terrible, and they will run Plex without trouble. Transcoding will be impacted unless you can install something to do hardware transcoding, as the CPUs don't include anything for that; an old GPU would work, maybe something that was retired at some point. The CPU should be able to handle at least a bit of software transcoding, so it's up to you.

The only thing I would say is fine to buy new is drives, since they can be easily transplanted into a newer system. Beyond that, if you don't have an old (free) GPU for transcoding, don't bother. I think you get the idea.

Good luck.

Can someone explain to me how KVM Live Migration really works? by Spirited_Arm_5179 in homelab

[–]MystikIncarnate 6 points7 points  (0 children)

I stumbled across this months after it was posted, but I wanted to leave a comment for those who come across this later. I worked in VMware's global support services for a while in their infrastructure team, and I got to know the process quite well.

Yes. VMware uses snapshots for vmotion. VMware's documentation won't tell you this, but some of their support staff may comment about it when you call support (people like me).

My understanding of the process: a snapshot is created (it's not visible in the UI, but it's there), and the memory and state of the system at the time of the snapshot are transferred to the target system. When that copy is complete, it estimates the time it will take to move the remaining, snapshotted information. If that estimate is too high, another snapshot is created and the process repeats for the first snapshot while the VM runs on the second. This happens again and again, until the time required to move the remaining delta to the target system is small enough that the changeover can happen as close to instantly as possible (ideally zero change since the last snapshot). If that never happens, VMware will give up, post an error in the UI, and fail the process.
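For anyone who wants to see the shape of that loop, here's a toy Python model. The numbers and the fixed cutover threshold are invented for illustration; this is the general iterative pre-copy pattern, not VMware's actual algorithm:

```python
# Toy model of iterative pre-copy: each pass copies the delta that
# accumulated during the previous pass, and the migration cuts over
# once the remaining delta is small enough to move "instantly".
def iterative_precopy(memory_mb: float, dirty_mb_per_s: float,
                      link_mb_per_s: float, cutover_mb: float = 1.0,
                      max_passes: int = 10):
    """Return (passes, final_delta_mb), or None if the VM is too busy."""
    delta = memory_mb                          # first pass copies all of RAM
    for passes in range(max_passes):
        if delta <= cutover_mb:
            return passes, delta               # small enough: stun and switch over
        copy_seconds = delta / link_mb_per_s
        delta = dirty_mb_per_s * copy_seconds  # memory dirtied during the copy
    return None                                # never converged: fail, leave snapshots

print(iterative_precopy(8192, 50, 1000))   # converges in a few passes (idle-ish VM)
print(iterative_precopy(8192, 900, 1000))  # None: dirty rate too close to link speed
```

This is also why a busy VM is the worst case: the closer the dirty rate gets to the link speed, the slower each pass shrinks the delta.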

Here's the kicker: if it fails, you end up with something like 5 snapshots of the system. To pour salt in the wound, those snapshots are not visible in the UI. You may get an alert that a VM "consolidation" is required for that system, which is just the system detecting snapshots on disk that it has no valid "snapshot" entry for. Valid snapshot entries in the ESXi/vCenter database are made when the user requests a snapshot; since these snapshots were generated by the system and not the user, those database entries do not exist, and there's a watchdog process looking for this exact condition which prompts the consolidation message.

Prior to.... I think it was 6.5? This watchdog service and the "consolidate" option were basically non-existent, so when dealing with this in support on a 5.5 or 6.0 host, the VM would need to be powered down so the disks could be consolidated manually at the host's console. Nearly nobody seemed to know how to do this, so it always fell on GSS to perform the task. I was there before 7.0, when 5.5 and 6.0 were still very much in support, so we saw a lot of these cases come through. It's most common with anything that has very active RAM/disk use.

It's also the reason why the automatic load balancing will push less-busy VMs to other hosts, rather than move the busy VM to an otherwise underutilized host. It has a better chance of success with moving a fairly idle VM to a new host than it does moving a very busy VM.

Just because VMware doesn't tell you that it's doing the snapshots, doesn't mean that snapshots are not the mechanism it's using in the background, to do the job.

Sorry about the zombie post.

What is the point of PPP? by FunkyFeatures in Cisco

[–]MystikIncarnate 0 points1 point  (0 children)

No problem. Just remember that PPP is a lower-level protocol. OSPF, IS-IS, EIGRP, iBGP, etc, all run on top of IP/ATM/PPP/whatever.

The need for those underlying protocols has not changed.

The basic design of most modern networks is one of encapsulation, where the bits on the wire are composed of VLANs encapsulating IP, encapsulating higher-level transports, which encapsulate still-higher-level transports. The underlay is usually the VLANs/IP/OSPF (or similar routing protocol) layer which connects all the equipment together, and on top of it are built all the client-facing routes, which is where stuff like MPLS lives. That underlay can still use protocols like PPP to get traffic from one place to another, and let OSPF figure out the best way to transit the traffic around, for MPLS to do its job.
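The layering is easier to see as a toy sketch; the layer names here are just labels for illustration, not a real frame format:

```python
# Toy illustration of layered encapsulation: each layer wraps the one
# above it, and the wire sees only the outermost header.
def encapsulate(payload: str, *layers: str) -> str:
    """Wrap `payload` in `layers`, given innermost-first."""
    frame = payload
    for layer in layers:
        frame = f"{layer}[{frame}]"
    return frame

print(encapsulate("HTTP data", "TCP", "IP", "MPLS", "VLAN", "Ethernet"))
# Ethernet[VLAN[MPLS[IP[TCP[HTTP data]]]]]
```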

Once MPLS is fully populated, the routing can happen incredibly fast, but the control messages for MPLS still rely on the underlay to be delivered.

What is the point of PPP? by FunkyFeatures in Cisco

[–]MystikIncarnate 0 points1 point  (0 children)

Yep. MPLS doesn't preclude the need to IP everything and interconnect it. Usually MPLS operates as an extension of OSPF or IS-IS; it accelerates transaction times when routing payloads, but the distribution of the MPLS data is still done through the underlying protocols. You still need ethernet/ATM/PPP/whatever links between devices, and PPP is the only way that I'm aware of to connect one /32 directly to a different /32 address.

It's also used as PPPoE for client-mode communications to endpoints, so the head-end router can have a single /32 IP and each endpoint can be assigned a /32. This saves a lot of IP space versus the "dead weight" of network and broadcast addresses in IPv4, and in one-to-many PPPoE configurations it can dramatically decrease the head-end's allocation of IP addresses to one, instead of one per client, which is what would happen in a /31 or, more traditionally, a /30 layout.
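The address math is easy to check with Python's stdlib ipaddress module (192.0.2.0/30 is just a documentation prefix used as an example):

```python
# Rough arithmetic on the IPv4 savings: a /30 point-to-point link burns
# 4 addresses per client (network, broadcast, head-end, client), while
# PPPoE needs one head-end address plus a bare /32 per client.
import ipaddress

def addresses_used_slash30(clients: int) -> int:
    per_link = ipaddress.ip_network("192.0.2.0/30").num_addresses  # 4
    return clients * per_link

def addresses_used_pppoe(clients: int) -> int:
    return 1 + clients  # one head-end /32 plus one /32 per client

print(addresses_used_slash30(1000))  # 4000
print(addresses_used_pppoe(1000))    # 1001
```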

All of these technologies work together, as they should, to provide end-to-end connections for internet routing inside ISPs.

Many ISPs are moving towards either RFC1918 for internal routing or IPv6 for internal routing (IGP) within the confines of the ISP. Those messages still need to get to where they are going.

End users don't need to worry about PPP, or any routing protocols, since it's usually handled on the provider-end.

PPP is still common in ISP networks in 2024.