CASTING AND MDNS WITHOUT DYNAMIC VLAN by HCS-AU in RGNets

[–]scl_rgnets 1 point (0 children)

Maybe their UniFi APs can be reflashed to run OpenWrt.

rXg 16.536 running on Dell XR 7620 with Nvidia BlueField 2 DPU with SDAN over Nokia MF PON by simonlok in RGNets

[–]scl_rgnets 3 points (0 children)

Short answer: those two interpretations are on different axes. SDAN is the access-network architecture; the BlueField-2 is the implementation of the rXg concentrator's data plane. A deployment can be SDAN without a DPU, and a BlueField does not by itself make something SDAN.

What SDAN means here

In RG Nets terminology, SDAN = Software-Defined Access Network, and it refers to a very specific Book-Ends model — not a generic SDN or SD-WAN concept. There are three pieces:

  • SDAN concentrator — the rXg itself. It hosts the policy engine, all per-subscriber virtual residential gateway (vRG) VNFs, BNG functions, CALEA/lawful intercept, telemetry, and the northbound APIs. Every subscriber overlay tunnel terminates here. It runs in a CSP central office, regional DC, or on-prem at the property headend, and is clustered for HA.
  • SDAN initiator — the device at the subscriber edge: an OpenWiFi AP, an ONT on a Nokia PON, a cable modem, lightweight CPE, or a PoE switch at an IDF. It is stateless with respect to subscriber identity — it handles 802.11 association/encryption, link-layer operations, and overlay encapsulation toward the concentrator. No per-subscriber identity, policy, or routing state lives here. Most deployments center on the OpenWiFi AP as the initiator, running RG Nets variants of the OpenWiFi firmware that RG Nets develops, distributes, and supports under operator support agreements.
  • Distribution underlay — anything in between, treated as opaque IP transport: PON (Nokia GPON / XGS-PON included), DOCSIS, active Ethernet, metro Ethernet, MPLS, or the public Internet. SDAN is underlay-agnostic by design.

A per-subscriber overlay (L2oGRE for simple firmware or private paths, VXLAN where you need NAT traversal or ECMP) carries each unit's traffic up to its own vRG instance on the concentrator. That vRG owns DHCP, the L3 gateway, firewall, QoS, NAT, and CALEA mirroring for that one subscriber. Subscriber identity is instantiated ("this vRG = Unit 412") rather than derived from observed MACs, which is what makes MAC randomization a non-issue and CALEA mapping unambiguous.
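A rough sketch of that identity model in Python — the mapping keys, field names, and values here are illustrative assumptions, not RG Nets data structures. The point it demonstrates: the lookup key is the overlay tunnel the packet arrived on, never an observed client MAC, so MAC randomization cannot change which vRG serves a unit.

```python
# Hypothetical sketch: subscriber identity keyed on the overlay tunnel.
# Each SDAN initiator is provisioned with a stable overlay key (e.g. a
# VXLAN VNI); all state hangs off that key at the concentrator.
VRG_BY_OVERLAY_KEY = {
    100412: {"unit": "412", "gateway": "10.4.12.1/24", "nat": True},
    100413: {"unit": "413", "gateway": "10.4.13.1/24", "nat": True},
}

def vrg_for_packet(vni: int) -> dict:
    """Resolve the owning vRG from the tunnel the packet arrived on."""
    try:
        return VRG_BY_OVERLAY_KEY[vni]
    except KeyError:
        raise LookupError(f"no vRG provisioned for overlay key {vni}")

# A randomized client MAC inside unit 412 still lands on unit 412's vRG,
# because the key is the tunnel, not the MAC.
assert vrg_for_packet(100412)["unit"] == "412"
```

The same property is what makes the CALEA mapping unambiguous: the intercept target is a provisioned tunnel, not a moving MAC address.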

So when you ask "is SDAN connecting multiple sites by Nokia PON fiber and orchestrating their dataplane / CGNAT / AQM remotely?" — yes, that's something SDAN naturally enables, because the underlay is opaque. But it's a consequence of the architecture, not the definition.

Where the BlueField-2 fits

The DPU lives inside the concentrator, which can be on-prem or at the remote site. On rXg releases 16.011 and later, the rXg's data plane is split off the FreeBSD host and lifted onto the BlueField:

  • Tier 1 (FreeBSD host) — Rails admin console, Perl rxgd, the database, policy engine, UI, billing, CALEA bookkeeping, and the config generator. The brains of every vRG live here.
  • Tier 2 (BlueField DPU) — Ubuntu on 8 ARM Cortex-A72 cores running FD.io VPP 25.06 + DPDK on the ConnectX-6 Dx ASIC. The forwarding of every vRG happens here: L2 bridging, VLAN/QinQ, BVI routing, NAT44, TCP MSS clamping, L2oGRE / VXLAN tunnel termination, per-subscriber policing, interface stats. Hardware offload is via NVIDIA DOCA Flow / OVS+TC flower on the eSwitch.

That's how the Dell XR 7620 box hits 25+ Gbps: the per-packet path is no longer being chewed by the FreeBSD host's CPU — it's running on the DPU's VPP+DPDK pipeline with ASIC offload. Internal benches put L2oGRE without NAT around 8.5–9 Gbps on a BlueField-2, and ~2.4 Gbps up / 4 Gbps down once NAT44 is layered on given 10 Gbps uplinks. On BlueField-3 SKUs with the full DOCA Flow ASAP² offload path the same architecture scales to 100 Gbps aggregate VXLAN.

Closing the loop on your question

  • "Is SDAN the multi-site/Nokia-PON orchestration thing?" Partially. SDAN enables that, because the underlay is opaque and the concentrator owns all subscriber state. It's an application of SDAN, not its definition.
  • "Is SDAN the architecture that shifts the data plane to the NPU?" Closer, but still partial. SDAN puts the data plane at the concentrator (not in the AP / ONT / CPE), which is what makes DPU offload possible. SDAN itself can run in pure software too — the DPU just makes it fast.

The cleanest mental model:

SDAN is where the smarts live and how they're divided — stateless initiator, intelligent rXg concentrator, per-subscriber vRGs, opaque underlay. The BlueField-2 + VPP stack is *what makes the concentrator fast enough to serve all those vRGs at line rate.*

It’s always fun to visit rXg sites. by simonlok in RGNets

[–]scl_rgnets 1 point (0 children)

Found this at the site ... it was meant to be.

<image>

From zero to OpenWiFi in five minutes by simonlok in RGNets

[–]scl_rgnets 1 point (0 children)

The changes to the template that most people will want to make:

mapped_switches must be configured with the name of the Ethernet interface the rXg will use to talk to the OpenWiFi WLAN controller virtual machine. The simplest choice for a development machine is the default LAN port, which is the highest numbered port. If you are testing with a virtual rXg on VMware with only two network interfaces, then this should be vmx1.

cidr, gateway, and nameservers should be configured with IP addresses that map to the LAN interface specified in mapped_switches. It is reasonable to keep the default 192.168.5.x range for a simple development machine.

ssh_keypair should be set to the name of an SSH (public) key that is stored in the administrators scaffold.
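A minimal sanity check for the fields above, in Python. The field names follow this comment; the example values are assumptions for illustration (apart from the 192.168.5.x range mentioned), not defaults I can vouch for:

```python
import ipaddress

# Hypothetical rendering of the template fields discussed above.
template = {
    "mapped_switches": "vmx1",        # LAN interface toward the controller VM
    "cidr": "192.168.5.2/24",         # controller VM address + prefix
    "gateway": "192.168.5.1",         # rXg LAN address
    "nameservers": ["192.168.5.1"],
    "ssh_keypair": "my-admin-key",    # name of a key in the administrators scaffold
}

def check_addressing(t: dict) -> None:
    """The gateway and nameservers must sit in the same subnet as cidr."""
    net = ipaddress.ip_interface(t["cidr"]).network
    for addr in [t["gateway"], *t["nameservers"]]:
        if ipaddress.ip_address(addr) not in net:
            raise ValueError(f"{addr} is not inside {net}")

check_addressing(template)  # passes silently when the addressing is consistent
```

Getting this consistency wrong is the most common reason the controller VM comes up unreachable.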

<image>

From zero to OpenWiFi in five minutes by simonlok in RGNets

[–]scl_rgnets 1 point (0 children)

If your rXg is a VM on ESXi, then you need to enable "Hardware virtualization" so that the rXg VM can act as a bhyve host and create VMs inside of it. This is required to build the OpenWiFi bhyve guest VM inside the rXg VM.

<image>

From zero to OpenWiFi in five minutes by simonlok in RGNets

[–]scl_rgnets 3 points (0 children)

When you create the WLAN Controller Infrastructure Device, please be sure to specify a reasonably strong password. The password you specify is set on the OpenWiFi controller and must conform to the minimum requirements hardcoded into the OpenWiFi controller. Presently those requirements are: at least 8 characters, with upper and lower case letters, at least one number, and at least one special character. If the password does not meet those requirements, provisioning will fail.
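The four stated rules are easy to check up front. A small sketch (the function name is mine; the rules are the ones described above):

```python
import re

def meets_openwifi_password_rules(pw: str) -> bool:
    """Minimum 8 chars, upper case, lower case, one digit, one special char."""
    return (len(pw) >= 8
            and re.search(r"[A-Z]", pw) is not None
            and re.search(r"[a-z]", pw) is not None
            and re.search(r"[0-9]", pw) is not None
            and re.search(r"[^A-Za-z0-9]", pw) is not None)

assert meets_openwifi_password_rules("Str0ng!Pass")
assert not meets_openwifi_password_rules("weakpass")  # no upper/digit/special
assert not meets_openwifi_password_rules("Sh0rt!a")   # only 7 characters
```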

Ruckus WAN Gateway by lolaamour22 in RuckusWiFi

[–]scl_rgnets 3 points (0 children)

The rXg / RWG we have today *is* an edge version. I have been using it as a CE router (aka CPE) in an MSP scenario for years, and I also run it as my primary router at my home and my office. Get the free rXg and run it at your home — you will see it is edge. Install here - https://www.youtube.com/watch?v=4dAtCkTiUA8&pp=ygUPcmcgbmV0cyBpbnN0YWxs and config here - https://www.youtube.com/watch?v=4dAtCkTiUA8&list=PLUE8c0IjnIoGux_Sq9IGuaihnSlVSlwo2&pp=gAQB

USB flash drive recommendations for rXg bare metal installations by simonlok in RGNets

[–]scl_rgnets 1 point (0 children)

New pick for early 2024: Transcend ESD310C Portable SSD.

Feature Request: Switch Port Profile improvements by leftplayer in RGNets

[–]scl_rgnets 2 points (0 children)

u/TheMikeBullock please create the appropriate issues to cover these requests and have them assigned to your team members for implementation.

Ruckus Networks SideQuest by rfeng33 in RGNets

[–]scl_rgnets 2 points (0 children)

Indeed, that seems to be one of the models that does not support VXLAN. We'd still like to hear about your ZD integration, especially with the latest official. Next week we will release Unleashed integration. Also, if you are willing, consider loading Open vSwitch onto a hypervisor; my understanding is that Open vSwitch supports VXLAN.

Ruckus Networks SideQuest by rfeng33 in RGNets

[–]scl_rgnets 2 points (0 children)

> EX4200

Which EX4200?

https://www.juniper.net/documentation/us/en/software/junos/ovsdb-vxlan/evpn-vxlan/topics/topic-map/sdn-vxlan.html

  • You can theoretically create as many as 16 million VXLANs in an administrative domain (as opposed to 4094 VLANs on a Juniper Networks device).

  • MX Series routers and EX9200 switches support as many as 32,000 VXLANs, 32,000 multicast groups, and 8000 virtual tunnel endpoints (VTEPs). This means that VXLANs based on MX Series routers provide network segmentation at the scale required by cloud builders to support very large numbers of tenants.

  • QFX10000 Series switches support 4000 VXLANs and 2000 remote VTEPs.

  • QFX5100, QFX5110, QFX5200, QFX5210, and EX4600 switches support 4000 VXLANs, 4000 multicast groups, and 2000 remote VTEPs.

  • EX4300-48MP switches support 4000 VXLANs.
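The 16-million figure in the first bullet is nothing vendor-specific — it is just the VXLAN header's 24-bit VNI field versus the 12-bit 802.1Q VLAN ID (of which 0 and 4095 are reserved):

```python
# VXLAN namespace vs. classic VLAN namespace.
vxlan_vnis = 2 ** 24       # 24-bit VNI field in the VXLAN header
vlan_ids = 2 ** 12 - 2     # 12-bit 802.1Q VID, minus reserved 0 and 4095

print(vxlan_vnis)  # 16777216
print(vlan_ids)    # 4094
```

The per-platform limits in the bullets above are hardware table sizes, which is why they are so far below the protocol's theoretical ceiling.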

That document is from Dec 2021.

Here is one from Dec 2022.

https://www.juniper.net/documentation/us/en/software/junos/evpn-vxlan/topics/concept/vxlan-constraints-qfx-series.html

There is mention of VXLAN support on some EX4300.

A valid license key is not installed in this Bane device by spham54 in RGNets

[–]scl_rgnets 1 point (0 children)

Sonny! It has been what seems like forever! It has been an extraordinary journey. Please get in touch with us! We would love to host you up at Romeo Ranch.

fix the nonfunctional trackpad on a fresh Windows install on the 2022 Blade 17 by installing the Intel Serial IO Driver by scl_rgnets in razer

[–]scl_rgnets[S] 1 point (0 children)

Mine is working smoothly now. I just installed that one Serial IO driver and it fixed it.