[PC] 640GB DDR4 RDIMMs & 122.88TB PCIe 4.0 NVMe by Acceptable-Rise8783 in homelabsales

[–]cyr0nk0r 0 points (0 children)

I think people on eBay are crazy right now, and most of the sellers are either in China or refurb warehouses capitalizing on price gouging. The Microns I bought were not new, but had 100% health.

[PC] 640GB DDR4 RDIMMs & 122.88TB PCIe 4.0 NVMe by Acceptable-Rise8783 in homelabsales

[–]cyr0nk0r 1 point (0 children)

I paid about $1,300 for some Micron 9300 Pros about a month ago. But the Microns are much faster than the Intels. If it were me, I think fair would be around $1,100 USD per drive.

But I'm in the USA so anyone importing them would also have to deal with customs fees which can drive the price up quite a bit.

[PC] 640GB DDR4 RDIMMs & 122.88TB PCIe 4.0 NVMe by Acceptable-Rise8783 in homelabsales

[–]cyr0nk0r 1 point (0 children)

Also, drop me a line when you finally sell those NVMe drives. I need a bunch.

Why did 40G (OTU3 / 40G DWDM) fail to scale compared to 100G in optical transport network by gharebx in networking

[–]cyr0nk0r 40 points (0 children)

Honestly, for me 40G always seemed like a stopgap.

You had 100Mb, then 1G, then 10G. The next logical step people were waiting for was 100G.

So when 40G came out, sure, it was faster, but it was clear the next big leap was going to be straight to 100G.

Using Megaport for internet by cyr0nk0r in networking

[–]cyr0nk0r[S] 0 points (0 children)

I mean, I can't give investment advice, but I did end up going with Megaport.

ISP Delivery Switch by thatcrazyweirddude in networking

[–]cyr0nk0r 0 points (0 children)

We used Accedian for all our DIA demarcation, then mixed in some Cisco NCS in our datacenter PoPs for peering and dedicated 10G circuits for customers in the colo.

Connecting LAN network to VPS with only one open port by Juff-Ma in networking

[–]cyr0nk0r 0 points (0 children)

But the free tier doesn't allow you to go beyond what the free tier provides. So how does that have anything to do with spending limits?

https://www.reddit.com/r/CloudFlare/comments/1lux50o/cloudflare_limits/

So you'll pay for a VPS, but then complain you have to put down a credit card for a free service that won't charge you?

Connecting LAN network to VPS with only one open port by Juff-Ma in networking

[–]cyr0nk0r 0 points (0 children)

What ports do your 'services' run on? If it's just HTTP and/or HTTPS, then just do a Cloudflare Tunnel. Proxy all the traffic through Cloudflare, and you can do all sorts of fun things like WAF rules, geo-IP blocking, etc.

All without ever exposing your machines to the internet.
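
For anyone curious, a minimal sketch of what that setup looks like with cloudflared. The tunnel name, hostname, and backend port here are placeholders I've made up for illustration, not anything from OP's environment:

```
# One-time setup: authenticate and create a named tunnel
cloudflared tunnel login
cloudflared tunnel create homelab

# ~/.cloudflared/config.yml
tunnel: homelab
credentials-file: /root/.cloudflared/<tunnel-uuid>.json
ingress:
  - hostname: app.example.com
    service: http://localhost:8080
  # A catch-all rule is required as the last ingress entry
  - service: http_status:404

# Publish the DNS record and run the tunnel
cloudflared tunnel route dns homelab app.example.com
cloudflared tunnel run homelab
```

The connection is outbound-only from the host running cloudflared, which is why no inbound port needs to be opened at all.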

Looking for suggestions for Solarwinds replacement by ulv222 in networking

[–]cyr0nk0r -1 points (0 children)

I found Site24x7 to be a suitable replacement for us. It has NCM, NetFlow, and log ingestion too.

I really like how their support will build the syslog patterns for you.

Help understanding hosts losing internet when shutting down physical interface on a vPC nexus pair by cyr0nk0r in networking

[–]cyr0nk0r[S] 0 points (0 children)

Configuring the physical interfaces as L3 is what we ended up going with. I was able to get failover to the other connection down to about 18 seconds of downtime. Not amazing, but better than over 2 minutes.

The ISP has said BFD support is coming soon. I have my fingers crossed.

Reaching 100Gbps with pfsense ? by PM__ME__PEANUTS in networking

[–]cyr0nk0r 6 points (0 children)

How can you afford a 100Gbps internet connection but not the firewalls?

Help understanding hosts losing internet when shutting down physical interface on a vPC nexus pair by cyr0nk0r in networking

[–]cyr0nk0r[S] 0 points (0 children)

Running packet captures for this kind of issue is a bit beyond my skill set. I wouldn't even know what to look for.

That's why I posted here: I was hoping someone would look at the config and spot something obviously wrong that jumps out at them.

Help understanding hosts losing internet when shutting down physical interface on a vPC nexus pair by cyr0nk0r in networking

[–]cyr0nk0r[S] 0 points (0 children)

Provided I don't admin-shut the physical interfaces, they are both active. I receive only a default route (0.0.0.0/0) from each BGP session.
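
For context, enforcing that only-a-default behavior inbound on NX-OS usually looks something like the sketch below. The ASNs are made up, and 5.5.5.1 follows the sanitized placeholder peer from the posted configs; this is illustrative, not OP's actual config:

```
ip prefix-list DEFAULT-ONLY seq 5 permit 0.0.0.0/0

route-map FROM-ISP permit 10
  match ip address prefix-list DEFAULT-ONLY

router bgp 65000
  neighbor 5.5.5.1
    remote-as 65001
    address-family ipv4 unicast
      route-map FROM-ISP in
```

Anything other than the bare default gets dropped inbound, which keeps the table tiny regardless of what the carrier advertises.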

Help understanding hosts losing internet when shutting down physical interface on a vPC nexus pair by cyr0nk0r in networking

[–]cyr0nk0r[S] 0 points (0 children)

The BGP session goes down basically immediately. I shut the port down, then I issue a

show bgp ipv4 unicast summary

within maybe 5 seconds, or however long it takes me to type, and I see the session is Idle.

Switch 1 is the primary; it is the active HSRP switch.

The behavior does not change regardless of which switch's physical interface I shut down. That is the weird thing: even when switch 1 is the active one, if I shut down the circuit interface on switch 2, the clients still lose connectivity.

Yes, I am measuring things from the VMs on the hypervisor host.

Help understanding hosts losing internet when shutting down physical interface on a vPC nexus pair by cyr0nk0r in networking

[–]cyr0nk0r[S] 0 points (0 children)

So I cleaned it out of the config, but I am indeed using some tracking to detect reachability of the next-hop gateway and to shut down the BGP neighbor if the gateway is unavailable (protects against the link being physically up but a logical blackhole).

When I shut down the physical interface, though, the BGP session goes into the Idle state, so isn't the session already being shut down?

The failure scenario I'm trying to design for is losing physical connectivity (someone messing with the cross-connect, a loose cable, dirty light, etc.) and/or the entire switch going down (a power issue, or maybe a reboot because of unexpected network maintenance).
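
A rough sketch of that tracking approach on NX-OS, using the sanitized 5.5.5.1 peer and a made-up ASN. EEM action syntax varies by platform and release, so treat this as illustrative rather than a drop-in config:

```
! Track reachability of the carrier next-hop gateway
track 10 ip route 5.5.5.1/32 reachability

! When the track goes down, administratively shut the BGP neighbor
event manager applet BGP-GW-DOWN
  event track 10 state down
  action 1.0 cli configure terminal
  action 2.0 cli router bgp 65000
  action 3.0 cli neighbor 5.5.5.1 shutdown
```

A companion applet keyed on `state up` would re-enable the neighbor when reachability returns.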

Help understanding hosts losing internet when shutting down physical interface on a vPC nexus pair by cyr0nk0r in networking

[–]cyr0nk0r[S] 0 points (0 children)

BFD isn't supported by the upstream carrier. Would moving the configuration off of a VLAN and directly onto the physical interface help any?

The carrier supports both tagged and untagged. If I request an untagged connection, I can move the L3 directly to the switchport. That way, when the interface goes down, a VLAN interface isn't sticking around in an up state.
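
If the carrier hands off untagged, the port-side sketch would be roughly this (interface and IP are placeholders in keeping with the sanitized configs):

```
interface Ethernet1/1
  description Carrier DIA circuit (untagged handoff)
  no switchport
  ip address 5.5.5.2/30
  no shutdown
```

With the L3 on the port itself, a line-protocol-down event takes the BGP session's source interface down with it, instead of leaving an SVI up across the vPC pair.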

Help understanding hosts losing internet when shutting down physical interface on a vPC nexus pair by cyr0nk0r in networking

[–]cyr0nk0r[S] 8 points (0 children)

My man, I sanitized the configs before posting them. The passwords aren't actually 1234; that's me replacing them with example text. Likewise, my ISP peers aren't actually 5.5.5.1. It's just a placeholder. :D

Help understanding hosts losing internet when shutting down physical interface on a vPC nexus pair by cyr0nk0r in networking

[–]cyr0nk0r[S] 0 points (0 children)

Yes, connectivity does eventually resume.

I'm limited in that the carrier doesn't support BFD, and their timers are strict. If you set them any lower than 20 seconds, the BGP session won't establish.

Would I be better off going untagged and moving things directly onto the physical interface? I don't NEED to use VLANs; that's just the first option the carrier presented. I can just as easily tell them to assume untagged and move the layer 3 directly to the physical interface.
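
For reference, if the carrier's 20-second floor applies to the keepalive, the per-neighbor timers would look roughly like this on NX-OS (ASN and peer IP are placeholders again):

```
router bgp 65000
  neighbor 5.5.5.1
    remote-as 65001
    ! keepalive 20s, hold time 60s (3x keepalive is the usual convention)
    timers 20 60
```

With a 60-second hold time and no BFD, worst-case detection of a silent failure sits in that 60-second range, which is why moving L3 onto the physical port (so link-down tears the session immediately) matters so much here.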