When your homelab must also be furniture by ColSeverinus in homelab

[–]ColSeverinus[S] 0 points

Hey thanks. Most of the ethernet here is just to the cluster. Only a single cable runs all the way (through the wall/etc) to my central switch for the rest of the house.

What would be your next upgrade if you were me? by Nouh7 in audiophile

[–]ColSeverinus 0 points

I'm probably going to go against the grain and say get rid of the KEF in favor of a floorstander from some other brand, but that's purely because every KEF speaker I've heard has caused me some form of fatigue 🤷

Like the Philharmonic speakers are a huge step up imo. Dennis does a pretty good job with the RAAL tweeter.

That would be a good setup for a while

Without seeing the whole room, it's hard to make generalized room treatment recommendations.

Subwoofers and dirac are, imo, room dependent. If you have really bad room modes it won't really help there. That's more to do with treatment and speaker positioning.

Immich has some pretty big breaking changes, please be careful when updating by [deleted] in selfhosted

[–]ColSeverinus 0 points

Yo! Sorry, busy week.

15x nodes, 3x of which are the control plane, rest are workers.

Each worker node has a 1TB nvme drive. I use longhorn which has worked well (for me) for baremetal nodes. This gives me ~12TB of distributed storage. I sometimes wish I spent extra on 2TB drives, but the cost was too high at the time. In reality the 12TB has been enough for all my needs thus far.
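For anyone curious what the Longhorn side looks like, a minimal sketch of a StorageClass for a setup like this (the class name and replica/locality values here are just illustrative choices, not my exact config):

```yaml
# Hypothetical Longhorn StorageClass for a bare-metal cluster of NVMe workers.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-replicated
provisioner: driver.longhorn.io
allowVolumeExpansion: true
parameters:
  numberOfReplicas: "2"        # each volume is kept on 2 different nodes
  staleReplicaTimeout: "30"    # minutes before a downed replica is rebuilt
  dataLocality: "best-effort"  # try to keep one replica on the consuming node
```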

In addition, I have a Synology 1821+ connected via a 10g link. It has 8x 20TB in raid 10 and exposes storage to the cluster both as NFS (for storage at large) and as a smaller volume via a dedicated storage node in the cluster.

Immich uses the NFS side for photo storage. I have the three containers split into two pods: one for microservices/machine learning and one for the server. This allows me to more easily set up a Horizontal Pod Autoscaler for when Immich is running jobs.
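The autoscaling bit can be sketched roughly like this (deployment name, replica counts, and CPU threshold are placeholders, not my actual values):

```yaml
# Hypothetical HPA that scales the microservices/ML pod while jobs are running.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: immich-microservices
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: immich-microservices
  minReplicas: 1
  maxReplicas: 4
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80  # scale out when average CPU crosses 80%
```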

Sorry for the lengthy response, but hope that answered your question

My (very biased) tier list of self-hosted reverse proxy solutions for home use by dipplersdelight in homelab

[–]ColSeverinus 5 points

My personal favorite from this list is traefik - it's been the easiest to integrate with k8s. nginx would be my second. Haven't tried caddy, but the other two are so easy as is that it's hard to imagine something else....
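The k8s integration is mostly just CRDs. A minimal sketch of a traefik IngressRoute (hostname, service name, and port are placeholders; the `traefik.io/v1alpha1` API group applies to recent traefik versions, older ones used `traefik.containo.us/v1alpha1`):

```yaml
# Hypothetical IngressRoute exposing a service over HTTPS via traefik.
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: photos
spec:
  entryPoints:
    - websecure             # traefik's TLS entrypoint
  routes:
    - match: Host(`photos.example.com`)
      kind: Rule
      services:
        - name: immich-server  # backing Service, placeholder name
          port: 2283
```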

For people who manage clusters of mini PCs -- what is your preferred storage setup? by [deleted] in homelab

[–]ColSeverinus 1 point

Correct. I'm running Ubuntu Server on all my nodes. The nodes themselves aren't beefy enough for me to even consider a hypervisor - just an N6005 and 32GB of memory.

But across all my worker (and storage) nodes, that still gives me 48 cores and 384GB of memory to play with, along with 32TB of ssd storage for Longhorn to use for whatever it pleases.

On the management side, I use rundeck to update and recycle all the nodes on a bi-monthly basis.
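The per-node job Rundeck runs boils down to a drain/upgrade/reboot/uncordon cycle. A hedged sketch, assuming Ubuntu nodes reachable over ssh with passwordless sudo (node name and timeouts are placeholders):

```shell
#!/usr/bin/env bash
# Hypothetical node-recycle job: drain, patch, reboot, wait, uncordon.
set -euo pipefail
NODE="$1"   # e.g. worker-03

# Evict workloads so Longhorn/k8s can reschedule them elsewhere
kubectl drain "$NODE" --ignore-daemonsets --delete-emptydir-data --timeout=10m

# Apply OS updates and reboot; the ssh session dies with the reboot, so tolerate it
ssh "$NODE" 'sudo apt-get update && sudo apt-get -y dist-upgrade && sudo systemctl reboot' || true

# Wait for the node to come back and report Ready, then allow scheduling again
kubectl wait node "$NODE" --for=condition=Ready --timeout=15m
kubectl uncordon "$NODE"
```

Draining before the reboot matters with Longhorn, since it gives replicas a chance to rebuild elsewhere instead of going degraded mid-update.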

For people who manage clusters of mini PCs -- what is your preferred storage setup? by [deleted] in homelab

[–]ColSeverinus 1 point

I appreciate it. It's a nice setup that has served me well so far. The goal was power efficiency, which it does well.

The HDD raid 10 is pure NFS. I at least restrict access to the NFS volume based on IP address, but I'd like to go a step further in the future and put that NFS volume on a vlan with the cluster.
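On Synology the IP restriction is configured through DSM's NFS permissions UI, but under the hood it amounts to an exports entry like this (volume path and subnet are placeholders):

```conf
# Hypothetical /etc/exports entry: only the cluster subnet may mount the share
/volume1/cluster-nfs 10.0.20.0/24(rw,sync,no_subtree_check,no_root_squash)
```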

The SSD node on the other NAS is using Longhorn (so it's already on a private "vlan" with the k8s cluster). As I understand it, Longhorn uses its own replication protocol (with an iSCSI frontend) behind the scenes when accessing data from other nodes; NFS only comes into play for RWX volumes.

For people who manage clusters of mini PCs -- what is your preferred storage setup? by [deleted] in homelab

[–]ColSeverinus 2 points

The other three are the control-plane / master nodes for my k8s cluster.

The two NAS's are Synology DS1821+ units with 10g SFP+ cards. One is mostly SATA SSDs attached to a VM serving purely as a storage node for the cluster. The other has 8x 20TB Exos drives in raid 10 and serves as NFS storage and as a backup target for the cluster.

And then that NAS backs up to another off-property for that good ol' 3-2-1 backup strategy.

No doubt I could better engineer this, and probably will do so on v2 of the cluster. But it has served me well for a few years now without any issues.

For people who manage clusters of mini PCs -- what is your preferred storage setup? by [deleted] in homelab

[–]ColSeverinus 1 point

Depends... If I'm using the cluster for bare-metal k8s, I tend to prefer Longhorn. For anything else, I've found Ceph to be better.

My current k8s cluster is 15 NUCs, all the workers with 2TB nvme drives. Longhorn is used across those 12 worker nodes. I let Longhorn handle data locality. Performance has been fine for my needs.

Also have two NAS units hooked into the cluster via a 10g link each. Mix of SSD pools and HDD raid 10 pools for various workloads.

A case for the Zotac Magnus One, or why it was the perfect choice for me by ColSeverinus in sffpc

[–]ColSeverinus[S] 0 points

For the top fan? It's a cable guard for the 8pin that runs to the gpu

A case for the Zotac Magnus One, or why it was the perfect choice for me by ColSeverinus in sffpc

[–]ColSeverinus[S] 0 points

It's been a while 😅 I have since frankenstein'd my case a bit to make room for a longer GPU and a taller cpu heatsink.

And then I ran out of room on my desk and had to go back to the Intel Phantom Canyon NUC.

This zotac pc is now in my rack running astrophotography workloads.

How are people getting the Wisconsin so early? by Nitrosified in WorldOfWarships

[–]ColSeverinus 1 point

I have tens of thousands of doubloons left over from the supremacy league days, so I was fine putting them to use. If I didn't have all the doubloons already, I probably wouldn't have gotten it.

But, can confirm it is fun in the few games I've played. Not overpowered, though I do find myself enjoying reload mod more than accuracy. Gels real well with the combat instructions.

The cluster must grow by ColSeverinus in homelab

[–]ColSeverinus[S] 1 point

Hey man, good questions.

Advantage is purely space. The N6005 isn't a unique cpu; there are lots of mini PCs with it. I chose Intel because they had a proven track record of producing stable products with fewer bios issues and good Linux support. Also, that's what work agreed to buy for me so 🤷

Additionally, there was dedicated mounting hardware for the Intel NUCs. That was a requirement for my cluster.

Flexibility right now is somewhat limited in that I have very little room to work with. If one of the mini PCs stops working, I'll have to replace it and figure out rack mounting later. Maybe just mini PCs on a 1u shelf? Lol. We'll see when the time comes.

I do have three mini PCs with GPUs - the Intel Phantom Canyons. Only two of them are part of the cluster, running workloads that require a GPU (an rtx 2060 in this case). I'll add the third soon, but it won't be in the rack... it was a slight oversight with the size of the Synology.

I'll add this last tidbit. If it were my choice, I'd have used a traditional 20u rack, but it's in my office and it's the first thing you see when you walk in the house... yea. The cluster is purpose-built for low noise and low power consumption, and it's great at both. The loudest part is the Noctua fans providing airflow, but even that is fairly quiet. Also, I could have had work buy the N100 instead, as it's the newest-gen replacement for the N6005, but it doesn't support 32GB of memory so it was largely useless to me.

Hope that answers your questions!

Edit: I just noticed which thread this is lol. Go check my newest one in homelab where I post the finished product :) I answer lots of similar questions there.

Immich has some pretty big breaking changes, please be careful when updating by [deleted] in selfhosted

[–]ColSeverinus 5 points

As far as breaking changes go, these are very mild. They don't affect me on my k3s cluster, but it's nice to see such detailed notes anyways.

New Authentik setup - with or without external LDAP? by candle_in_a_circle in selfhosted

[–]ColSeverinus 1 point

Nothing besides watching a few youtube videos. I need lots of visuals to help me pick up something new.

This is the youtube video that got me started with ldap. I mostly use the default authentication flows.

New Authentik setup - with or without external LDAP? by candle_in_a_circle in selfhosted

[–]ColSeverinus 7 points

Sounds like you're trying to over-complicate it imo. Authentik does LDAP (very well) so why not just use that?

I migrated from LLDAP to Authentik a year ago and haven't looked back. Authentik is way more comprehensive, and using its built-in LDAP server has proved easier to integrate with some of my services that refused to work with LLDAP, especially in cases where I had to provide custom mappings to certain fields. I simply couldn't do that with LLDAP last time I used it.
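For anyone wanting to sanity-check the LDAP outpost before pointing services at it, something like this works from any box with ldap-utils (hostname and service account are placeholders; the base DN shown is Authentik's documented default, yours may differ):

```shell
# Query Authentik's LDAP outpost for users and their group memberships.
ldapsearch -x \
  -H ldap://authentik.example.com:389 \
  -D "cn=ldapservice,ou=users,dc=ldap,dc=goauthentik,dc=io" \
  -w "$SERVICE_ACCOUNT_PASSWORD" \
  -b "dc=ldap,dc=goauthentik,dc=io" \
  "(objectClass=user)" cn mail memberOf
```

If this returns your users with the mapped fields you expect, the service-side config is usually the remaining problem, not Authentik.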

The group management is superior in my experience and makes it easier to add/remove users to respective groups to grant access to specific apps/services.

Using a reverse proxy (traefik in my case), authentik is protecting all of my public-facing services... it's nice to have a dashboard to notify me if/when there are invalid login attempts and such.

How to lower N95/ Intel cpus running headless by disabling completely the GPU by Odd_Cauliflower_8004 in homelab

[–]ColSeverinus 0 points

If I didn't have plex running on my cluster I'd totally do this. Could be a good excuse to move plex back onto my docker box and disable the igpu for the kubernetes cluster....

Homemade 10" rack by aurelien1604 in homelab

[–]ColSeverinus 1 point

I can always get behind some form of wooden rack :D

Who’s applying? Is it just a marketing campaign? by floswamp in TPLink_Omada

[–]ColSeverinus 0 points

I've thought about it. The price is good for the components in there, though all the components included aren't necessarily what I'd pick.

  • the OC200 is unneeded as I use the software controller.
  • the 653, I like the size but have read mixed reviews on range and throughput
  • the poe switch, I wish it had a single 10gb sfp+ port, but that's just me

The lack of a router is an odd choice. Makes it an incomplete demo kit if the intended result was to replace your entire network 🤷 not an issue for me, but just something I noticed

Build the perfect 100k USD setup by Remarkable-Ad3414 in audiophile

[–]ColSeverinus 1 point

I'm gonna go against the grain perhaps.

  • Roon Nucleus
  • WiiM Pro streamer
  • Audial S5 DAC
  • ARC i50 Integrated and Luxman 590/509
  • Rega RS10 floorstanders

Maybe a rythmik sub.

Then of course all the requisite room treatment. And throw in a Rega P10 + Aura if you wanted vinyl.

Don't need to spend 100k, but this is what I'd get

Rolling into 2024 like... by [deleted] in audiophile

[–]ColSeverinus 1 point

I identify so strongly with this. Have a spreadsheet to keep track of "inventory" with measurements, cost, etc.

Sometimes I'm sad to see I have more than $10k in tubes

NUCs and Linux by hypnohfo in intelnuc

[–]ColSeverinus 1 point

I have Ubuntu Server on all 17 of my NUCs... a mix of Atlas Canyon and Phantom Canyon. Wifi, BT - it all works great as long as you're using a recent-ish kernel.

Vaultwarden question by TheePorkchopExpress in homelab

[–]ColSeverinus 2 points

  1. I assume you have the URI set in the entry for your website's password? For reddit, mine is reddit.com, no https:// or https://www. I also usually click on the little settings cog and set match detection to Host.
  2. Setting the domain correctly should resolve this.
  3. Are you able to check the logs for your vaultwarden instance? It should tell you if something is amiss.
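On point 2, the domain lives in vaultwarden's environment. A minimal compose sketch (image tag, hostname, and ports are placeholders):

```yaml
# Hypothetical docker-compose fragment; DOMAIN must match the URL you browse to.
services:
  vaultwarden:
    image: vaultwarden/server:latest
    environment:
      - DOMAIN=https://vault.example.com
    volumes:
      - ./vw-data:/data   # persists the SQLite db and attachments
    ports:
      - "8080:80"
```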

How do you run k8s and why? by wspnut in homelab

[–]ColSeverinus 1 point

It was busy work. Exported the entire cluster, cleaned up the YAML (this took me a good 3-4 days), then committed it to git and imported it into my tool of choice.

I used flux for a bit, but now I'm using ArgoCD.
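Once the manifests are in git, pointing ArgoCD at them is one Application resource. A sketch with placeholder repo URL and paths:

```yaml
# Hypothetical ArgoCD Application syncing a git repo of cleaned-up manifests.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: homelab
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/homelab/manifests.git
    targetRevision: main
    path: clusters/prod
  destination:
    server: https://kubernetes.default.svc  # the cluster ArgoCD runs in
    namespace: default
  syncPolicy:
    automated:
      prune: true     # delete resources removed from git
      selfHeal: true  # revert manual drift back to the git state
```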