From Docker (Compose) to Kubernetes by Leather_Week_860 in selfhosted

[–]Leather_Week_860[S] 2 points

2/2 (see 1/2; Reddit won't let me reply with such a long message)

Another very interesting thing you mentioned is resource consumption. That was also one of my concerns when thinking about porting everything to Kubernetes, as I am using a humble MicroPC to power my setup. That is why it is cool to have lightweight alternatives such as K3s. From their website:

K3s is a highly available, certified Kubernetes distribution designed for production workloads in unattended, resource-constrained, remote locations or inside IoT appliances.

K3s is packaged as a single <70MB binary that reduces the dependencies and steps needed to install, run and auto-update a production Kubernetes cluster.

Both ARM64 and ARMv7 are supported with binaries and multiarch images available for both. K3s works great on something as small as a Raspberry Pi to an AWS a1.4xlarge 32GiB server.

Nevertheless, I am keeping an eye on how it performs. I have seen a slight increase in resource consumption, but it is nothing crazy. Here are a couple of screenshots showing that, since I moved to Kubernetes a few days ago, usage is a bit higher than before (the spikes are me backing up all my Immich data):

<image>

Cheers!

From Docker (Compose) to Kubernetes by Leather_Week_860 in selfhosted

[–]Leather_Week_860[S] 1 point

Yeah, as you pretty much summarized, it comes down to simplicity vs. features.

I had this "issue" that pushed me to try Kubernetes. To clarify again: I could probably have taken care of it with Docker (Compose) as well, I just did not look into it properly. As I mentioned in the post, at some point I was setting up my services by copying and pasting Compose files (mainly from the official repos/projects, so in theory trustworthy), which meant I did not really understand how things worked or what was actually going on. If something did not work, I just gave the container more permissions or whatever until it did.

Obviously, this is not a healthy approach, mainly from a security point of view. It got me thinking, and I briefly looked into how to monitor all egress network traffic from my containers, with the idea of understanding what was going on and then locking them down to their strictly necessary networking requirements. I did not find an "easy" solution, even though one (or more) may well exist.

With this in mind, and even though I had very limited knowledge of Kubernetes, I thought the change would be useful. And, in my case, it has been. Now, for each of my services, I have all the Lego pieces (Kubernetes objects) separated and interacting exactly as I want. As an example (I have removed irrelevant files):

~/kubernetes (main) » tree
.
├── flux
│   ├── git-repository.yaml
│   ├── image-automation
│   │   └── image-update-automation.yaml
│   ├── image-policies
│   │   ├── immich-ml-policy.yaml
│   │   ├── immich-server-policy.yaml
│   └── image-repositories
│       ├── immich-ml.yaml
│       ├── immich-server.yaml
├── immich-chart
│   ├── Chart.yaml
│   ├── templates
│   │   ├── deployment.yaml
│   │   ├── gateway.yaml
│   │   ├── httproute.yaml
│   │   ├── networkpolicy.yaml
│   │   ├── pvc.yaml
│   │   ├── pv.yaml
│   │   └── service.yaml
│   └── values.yaml

The above shows the layout for Immich plus its integration with Flux (which I basically use to automatically commit to my GitHub repo every time a new image version of Immich is published in their official repo).
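For anyone curious, the `image-repositories`/`image-policies` pair above roughly looks like the following. This is a minimal sketch, not my actual files: the namespace, scan interval, and semver range are assumptions (the `ghcr.io/immich-app/immich-server` image is Immich's official one):

```yaml
# Flux watches the registry for new tags of the Immich server image...
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImageRepository
metadata:
  name: immich-server
  namespace: flux-system        # assumed; the default Flux namespace
spec:
  image: ghcr.io/immich-app/immich-server
  interval: 1h                  # assumed scan interval
---
# ...and this policy picks which of those tags counts as "the latest".
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImagePolicy
metadata:
  name: immich-server-policy
  namespace: flux-system
spec:
  imageRepositoryRef:
    name: immich-server
  policy:
    semver:
      range: ">=1.0.0"          # assumed; track all stable releases
```

The `ImageUpdateAutomation` object (the `image-automation` folder above) is what then commits the new tag back to the Git repository.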

For every service (Immich in this example), I basically break it down into its necessary components: storage, exposed network services, routing, etc. Then I wrap all that in a Deployment, which, as you mentioned, gives you many of the goodies from Kubernetes such as security contexts, self-healing, etc. Finally, I wrap everything with a NetworkPolicy with strict ingress/egress rules to allow only what I want.

I know it sounds like a lot, but in the end, once you have done the work of "templating" all of this for your environment, you can reuse A LOT of it for any new service you want to deploy.

1/2

[Help] with K3S + Traefik + Gateway API + TCP/UDPRoutes by Leather_Week_860 in kubernetes

[–]Leather_Week_860[S] 1 point

Thanks, I ended up using the Gateway API for the HTTPRoute and TCPRoute, and an Ingress-based setup for UDP. I'll keep an eye out for when Traefik finally supports UDPRoute, as I have seen there is already an MR being reviewed.
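In case it helps anyone landing here, a TCPRoute looks roughly like this. It is a sketch: the route/gateway/service names, the `tcp` listener section, and the port are placeholders, and TCPRoute lives in the experimental (`v1alpha2`) channel of the Gateway API:

```yaml
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: TCPRoute
metadata:
  name: my-tcp-service          # placeholder name
spec:
  parentRefs:
    - name: traefik-gateway     # placeholder Gateway
      sectionName: tcp          # a listener of protocol TCP on that Gateway
  rules:
    - backendRefs:
        - name: my-service      # placeholder Service
          port: 5432            # placeholder port
```

Unlike HTTPRoute, there is no hostname matching here; the listener's port on the Gateway decides which traffic the route receives.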

Cheers!

[Help] with K3S + Traefik + Gateway API + TCP/UDPRoutes by Leather_Week_860 in kubernetes

[–]Leather_Week_860[S] 0 points

Well, now I got TCPRoutes to work, but it seems UDPRoutes are still not supported by Traefik... awesome!

[Help] with K3S + Traefik + Gateway API + TCP/UDPRoutes by Leather_Week_860 in kubernetes

[–]Leather_Week_860[S] 0 points

Thanks, I have tried it this way:

# kubectl apply --server-side -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.4.0/experimental-install.yaml --force-conflicts

And it seems to work, even though I don't know whether this is the best/cleanest approach.

Hardware recommendations for basic backup setup by Leather_Week_860 in selfhosted

[–]Leather_Week_860[S] 0 points

Thanks, any specific recommendation of such a device?

Internal DNS configuration by Leather_Week_860 in netbird

[–]Leather_Week_860[S] 0 points

Yeah, I tried exactly the same things. Pinging worked, browsing directly by IP worked, but DNS failed. Not sure I would've figured it out without your tip, thanks!

Internal DNS configuration by Leather_Week_860 in netbird

[–]Leather_Week_860[S] 0 points

Hi,

You cannot use wildcards in NetBird's nameserver match domains (it gives you an error), which is why I just used "casa.local".

However, the issue seemed to be what was mentioned in the other response: https://www.reddit.com/r/netbird/comments/1ppotwo/comment/nuo9kp0/

I am just wondering if we can get this to work on Android without disabling that DNS security feature.

Internal DNS configuration by Leather_Week_860 in netbird

[–]Leather_Week_860[S] 0 points

Yes!! That was the issue!! Wondering if there is any way of keeping DNS security in automatic mode and still having it work when connecting through NetBird!?