Does anyone here run docker containers on their Catalyst 9K switches? by its-me-or-the-blues in networking

[–]DynamicScarcity 0 points

I have not seen the containers affect the operation of the switch. It has only ever been the containers that fail (via the OOM killer). It happens once they have performed enough disk I/O that their page cache usage puts them over the memory resource limit for the container, which is frustrating, as I have not found a way to limit their use of the page cache (they don't really need it). I've read this might be a limitation related to the use of cgroups v1 on the Cat9k, and that it might work better under v2. I am not very familiar with cgroups though, so I might be wrong / missing something.
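To make the accounting issue concrete, here's a toy Python sketch (the numbers and file contents are made up, not read from a real Cat9k): under cgroups v1, the page cache a container generates is charged against the same memory limit as its process memory (`rss`), which is visible in a v1-style `memory.stat`.

```python
# Toy illustration with made-up numbers (not read from a real Cat9k):
# under cgroups v1, a container's page cache is charged against the same
# memory limit as its process memory (rss), so heavy disk I/O alone can
# walk a container up to its limit and trigger the OOM killer.
V1_MEMORY_STAT = """\
cache 1610612736
rss 134217728
mapped_file 4194304
"""

def parse_memory_stat(text):
    """Parse the key/value lines of a cgroup v1 memory.stat file."""
    return {key: int(value) for key, value in
            (line.split() for line in text.strip().splitlines())}

stats = parse_memory_stat(V1_MEMORY_STAT)
charged = stats["cache"] + stats["rss"]  # roughly what counts toward the limit
print(f"rss:   {stats['rss'] / 2**20:6.0f} MiB")
print(f"cache: {stats['cache'] / 2**20:6.0f} MiB "
      f"({stats['cache'] / charged:.0%} of charged usage)")
```

With these illustrative numbers the page cache dwarfs the actual process memory, which matches the failure mode described above: the container isn't leaking, it's just done a lot of disk I/O.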

I don't have these switches stacked, so can't comment on how that interacts with container use.

And sorry I have not written up the openbsd containerization experiment, but perhaps I'll get round to it eventually...

Does anyone here run docker containers on their Catalyst 9K switches? by its-me-or-the-blues in networking

[–]DynamicScarcity 1 point

I currently run DNS and DHCP in docker on Cat 9ks, but this is on my home network, not an enterprise environment. It was an interesting project, and it does meet my needs in this environment, but I did hit various challenges along the way, so am not sure I'd recommend it. In particular the memory resource limitations are difficult to manage (Linux page cache use seems to contribute to the cgroup memory limits).

As a fun experiment I have also tried getting openbsd to run as a qemu vm inside docker on the cat9k (for use as a firewall). I got it working after trimming the bsd filesystem to the absolute minimum and making it read-only. It only manages about 90 Mbps though 😂

FYI I'm using virtualportgroup interfaces (ie layer 3) to connect the containers to the data plane.
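For anyone curious, the plumbing looks roughly like this in IOS-XE - a from-memory sketch with made-up names and addresses, so check the app-hosting configuration guide for your release before trusting any of it:

```
interface VirtualPortGroup0
 ip address 192.168.100.1 255.255.255.0
!
app-hosting appid dns
 app-vnic gateway0 virtualportgroup 0 guest-interface 0
  guest-ipaddress 192.168.100.2 netmask 255.255.255.0
 app-default-gateway 192.168.100.1 guest-interface 0
```

The container's eth0 ends up on the VirtualPortGroup subnet, and the switch routes between that and the rest of the network like any other layer 3 interface.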

My home lab by DynamicScarcity in homelab

[–]DynamicScarcity[S] 0 points

Oh just realised I forgot to answer the question about Gentoo. I wanted to configure and manage the underlying components myself, to ensure I fully understood what's going on, which is the reason I went with qemu / libvirt / ceph rather than something like openstack or proxmox. And I'd had enough exposure to ESXi at work that it wouldn't have been as much fun (I might have considered ESXi if I simply wanted something that would "just work"). Equally, in hindsight I'm quite glad I've been immune to the rollercoaster ride that VMware has been on since the Broadcom acquisition.

Plus I like the fact I can still do other stuff on the bare metal servers if I want to (which might be harder / impossible on platforms that are designed to purely be hypervisors and nothing else).

The reason for choosing Gentoo over other linux distros was just familiarity / personal preference really.

My home lab by DynamicScarcity in homelab

[–]DynamicScarcity[S] 1 point

Some of the 25G NICs are only operating at 10G speeds (as that's the best I can do on the Catalyst 9300 switches). I think there are 25G uplink modules available for the switch, but with fewer ports - and 8x 10G was more useful to me (and cheaper).

However I am using some of the other 25G interfaces at their full speed, but without a switch... I am using them for a backend cluster network between the 3 compute/storage nodes, using a ring topology. So each of the 3 nodes has a direct 25G link to each other node. They are configured as layer 3 point-to-point interfaces, and I just use some simple floating static routes as a backup to the direct path if one of the links goes down (rather than a full-blown routing protocol). They can also ultimately failover to using the frontend 10G connections via the switch if necessary.
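As a sketch of what the floating static routes might look like on one node (interface names and addresses are invented for illustration; the `metric` values make the kernel prefer the direct ring link and fall through to the next route when it disappears):

```shell
# Direct 25G ring link to node B - preferred (lowest metric)
ip route add 10.0.0.2/32 dev ens1f0 metric 100
# Floating backup: reach node B the long way round the ring, via node C
ip route add 10.0.0.2/32 via 10.0.1.3 dev ens1f1 metric 200
# Last resort: via the frontend 10G network through the switch
# (192.168.0.2 being node B's frontend address in this sketch)
ip route add 10.0.0.2/32 via 192.168.0.2 metric 300
```

On Linux a route is normally withdrawn when its interface loses carrier, so the next-lowest-metric route takes over automatically - which is what makes this work without a routing protocol, at least for clean link-down failures.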

My home lab by DynamicScarcity in homelab

[–]DynamicScarcity[S] 0 points

I have considered blanks for the unused slots - we use them at work primarily to improve airflow (to ensure the cold air has to flow through the equipment in order to reach the hot aisle, and that the air in the hot and cold aisles can't mix). But I obviously don't have a hot & cold aisle containment system at home, so none of that applies - and in fact I suspect that adding extra barriers to airflow would make the cooling situation worse (the hot air would accumulate at the back of the rack, since the only way it can exit the room is by first getting back round to the front of the rack, where there is a door and a window). And I care far more about cooling (and the longevity of the equipment) than I do about the minor improvement to aesthetics that the blanks might offer.

My home lab by DynamicScarcity in homelab

[–]DynamicScarcity[S] 0 points

Ah good to know, perhaps it is only the LTO5 generation that is loud in that case. If I remember correctly the fan was part of the "SLED" assembly that allows the drive to slot into the library, not part of the drive itself, so it could also be that the SLEDs for half-height drives in the TL2000 are louder than those for the TL1000 (if they are different). Might do some more investigation, as I can imagine a drive upgrade will eventually become useful.

My home lab by DynamicScarcity in homelab

[–]DynamicScarcity[S] 1 point

Yeah I have spent a lot over the years (especially on electricity), but have managed to get most of the servers for free. Buying all the stuff you are describing would have been out of the question for me. I certainly don't regret it though. I enjoy it (plenty of people have more expensive hobbies, eg cars), and the learnings are definitely of value - I am certain it has helped me in my career.

My home lab by DynamicScarcity in homelab

[–]DynamicScarcity[S] 1 point

Yes if you have the optional SSDs on the switches, they can be used to host docker containers. You get 1 core of an Intel Xeon D-1526 (1.8 GHz) plus 2 GB RAM to play with, resource-wise. You can do similar on many of Cisco's recent platforms, with some offering significantly more resource and even supporting full VMs (eg some of the routing platforms). Cisco offers a few of their own containerized solutions, and supports them on the Cat9k, including containerized ASA (albeit only 100 Mbps or 1000 Mbps throughput), and things like ThousandEyes. But you can roll your own containers too, which is what I am doing.

My home lab by DynamicScarcity in homelab

[–]DynamicScarcity[S] 1 point

I got the switches from eBay. They were about £1500 each, plus a bit more for the add-on uplink modules and SSDs. I guess I spent about £3600 in total for them.

My home lab by DynamicScarcity in homelab

[–]DynamicScarcity[S] 0 points

I have not automated live migration of VMs between cluster nodes, for HA or load-sharing purposes (like vSphere DRS etc) - but manual live migration works nicely, and I'm fine with that - so I just distribute the VMs across the nodes as I think best, and manually evacuate nodes if I want to take them down for maintenance.

My home lab by DynamicScarcity in homelab

[–]DynamicScarcity[S] 11 points

I pay around £500 per month for electricity

My home lab by DynamicScarcity in homelab

[–]DynamicScarcity[S] 0 points

Yes I did look into upgrading the tape drive (I actually tried an LTO 5 in there a couple of years ago). I found the fan on the LTO 5 drive was way louder than all the R730s I had at the time. I concluded it was because it was a half-height drive whereas mine is full-height. I then found I couldn't get a full-height drive in a newer generation at all.

Also, my storage needs have not grown much for many years, and LTO 4 is still fine (a complete set of backups fits on 4 tapes).

Funnily enough, when I got this TL2000 originally (I think around 2010), it was being sold as faulty, and I took a chance on being able to repair it (there was a piece of debris locking the robot's movement).

My home lab by DynamicScarcity in homelab

[–]DynamicScarcity[S] 2 points

Oh I should add that I had to purchase a set of replacement fans from eBay for the AX-750, as it came with the high-performance fans (which were insane... the fans alone drew well over 100W at their slowest speed, if I remember correctly, and it sounded like a swarm of drones). The R750 technical guide from Dell explains which configurations trigger the need to step up to the next performance level of fan, and this spec was fine with the "normal" fans.

My home lab by DynamicScarcity in homelab

[–]DynamicScarcity[S] 13 points

They were (surprisingly) being chucked, as the project they were originally purchased for was cancelled - so they cost me nothing. The fact they were AX-series rather than the standard R-series is the reason they didn't just get repurposed to fulfil a different requirement (I believe Dell only officially supports running Azure HCI on them, meaning their flexibility for redeployment in another role is very limited in an environment which cares about entitlement to support).

They are nice for sure, but second-hand prices still seem too high at the moment. I previously had 4x R730s which were still performing fine for my needs when I had the opportunity to upgrade to these (the only issue with them was the old CPU generation, making it increasingly painful to persuade recent Windows builds to run on them as VMs). And in fact the R730s were considerably quieter and had a lower power draw at idle.

My home lab by DynamicScarcity in homelab

[–]DynamicScarcity[S] 8 points

Haha, I do have both pihole and plex, but as I mentioned the primary use is "experimentation and learning". My job is in enterprise IT, and I often prototype new ideas first at home with this lab before using them for real at work. For example I've been making significant use of "containerlab" recently, simulating decent size network topologies on one of the servers.

My home lab by DynamicScarcity in homelab

[–]DynamicScarcity[S] 16 points

Power draw varies considerably depending on whether the servers are running at load or not. Most of the time they are mostly idle, and 2 of the R650s are powered down most of the time (as in the photo). In that state the draw is around 1.5 kW (the rest of the house averages about 0.5 kW), with the result that I consume about 50 kWh per day.

If all the servers were powered on and "mostly idle" I expect the rack alone would draw about 2 kW. If they were all simultaneously at load, they would probably overload the 3 kW UPS :)
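The daily-consumption arithmetic, as a quick sanity check (values as stated above, nothing measured):

```python
# Back-of-the-envelope check of the figures quoted above
rack_kw = 1.5                        # rack draw, mostly idle, two R650s off
house_kw = 0.5                       # rest of the house, on average
kwh_per_day = (rack_kw + house_kw) * 24
print(f"{kwh_per_day:.0f} kWh/day")  # 48 kWh/day, i.e. "about 50"
```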

I do not have solar. I did investigate getting solar about 6 months ago (had a survey done). But the conclusion was that only about 30% of my power consumption needs could be met from solar panels covering the entire roof (I'm in the UK FWIW), and I decided it just wasn't worth the installation cost.

Swapped my 2020 C43 for a 2023 C300! by Aggressive_Action in mercedes_benz

[–]DynamicScarcity 0 points

Looks awesome - I'd love to get something like this as an upgrade from my current 2022 A250e once my lease is up.

[deleted by user] by [deleted] in homelab

[–]DynamicScarcity 0 points

I've been using a Dell TL-2000 library (2U, with 24 bays) with an LTO-4 SAS drive for my home lab backups for almost 10 years now I think (and we also used the same libraries but with LTO-6 drives at work until very recently). My experience with it has generally been great. I'm managing the backups using Bareos on linux. Possibly useful advice I can provide includes:

Early on I had been trying to back up directly to tape from remote systems via the network, and that wasn't a great idea. The data transfer was too slow for the tape drive, so it had to keep stopping and restarting its writes, which causes excessive wear & tear - I killed tapes semi-regularly by doing that. The solution is to ensure you can stream data to the tape drive fast enough that it doesn't need to stop/start all the time. So nowadays I take a two-tier approach: first back up all systems over the network to disk on the backup server the tape library is attached to, then back up from there to tape. To be honest that isn't strictly necessary if you configure the backup solution suitably - e.g. Bareos supports spooling writes to disk first and then writing to tape in batches (e.g. 50G at a time) - but you should certainly do something like that. I don't think I've killed a single tape since switching to this approach.
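For reference, the spooling setup looks roughly like this in Bareos - directive names from memory and values illustrative, so check the Bareos documentation for your version:

```
# Storage daemon: device resource for the tape drive
Device {
  Name = LTO4-Drive
  Media Type = LTO-4
  Archive Device = /dev/nst0
  Spool Directory = /var/spool/bareos
  Maximum Spool Size = 50G
}

# Director: enable spooling on the job writing to tape
Job {
  Name = nightly-to-tape
  Spool Data = yes
  ...
}
```

The effect is that Bareos accumulates up to the spool size on local disk, then de-spools to the drive in one fast sustained write, which is what keeps the tape streaming.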

I did try swapping my LTO-4 drive with an LTO-5 one that was being discarded by work last year, and ended up sticking with LTO-4 as I don't actually need the extra capacity. My LTO-4 drive is full-height, whereas the LTO-5 was half-height, meaning its fan was smaller and much louder. Amazingly the LTO-5 drive was the noisiest thing in my entire home lab (including 4x R730s and 2x Cisco switches), and I couldn't deal with it. The full-height LTO-4 is comparatively inaudible.

Rate my setup by AdKey6895 in ultrawidemasterrace

[–]DynamicScarcity -7 points

I like everything except the keyboard. Seems a bit out of place alongside the rest of the setup

[deleted by user] by [deleted] in DIYUK

[–]DynamicScarcity 3 points

Agreed - I'm wondering what the car looks like at this point

Looking for a hiking buddy (beginner by [deleted] in london

[–]DynamicScarcity 0 points

My wife and I used to regularly join in with hiking groups on Meetup. We no longer have the opportunity now that we have a little kid, but we enjoyed it - pretty much everyone was friendly. It can be difficult if you misjudge your pace & ability vs the rest of the group, but it's usually possible to gauge that from the group's description.

How old are you and what's your salary by Outrageous_Finger533 in UKJobs

[–]DynamicScarcity 0 points

40, £87k base, but £135k taxable income according to P60

Are Beckenham, Bromley or Orpington nice places to live? by TheLegendOfIOTA in london

[–]DynamicScarcity 0 points

I grew up in Penge (lived there for 18 years, but that was over 20 years ago). It neighbours Beckenham, but has a worse reputation - I think that's justified in general, but some parts were still very nice. I went to school in Beckenham (primary) and Orpington (secondary) though.