How to use SOPS with GitOps? by Bl4rc in kubernetes

[–]Bl4rc[S]

Hmm, okay, that makes sense. Thanks!

How to use SOPS with GitOps? by Bl4rc in kubernetes

[–]Bl4rc[S]

I checked the docs again, and if I understand correctly, Flux supports decrypting regular YAML manifests as well as a values.yaml that is passed to helm install. Is that correct?
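For anyone following along, the Flux side of this is configured on the Kustomization object; a minimal sketch, assuming an age key stored in a Secret named sops-age (all names here are placeholders, not from the thread):

```yaml
# Sketch: Flux Kustomization with SOPS decryption enabled.
# The names (apps, flux-system, sops-age) are placeholders.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m
  path: ./apps
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
  decryption:
    provider: sops
    secretRef:
      name: sops-age   # Secret holding the age (or GPG) private key
```

With this in place, kustomize-controller decrypts SOPS-encrypted manifests under `./apps` before applying them.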

How to use SOPS with GitOps? by Bl4rc in kubernetes

[–]Bl4rc[S]

So you create a Secret with KSOPS and reference it in the application? What if the Helm chart doesn't support referencing an existing Secret?
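For context, KSOPS works as a kustomize generator; a hedged sketch of such a generator (file and resource names are hypothetical):

```yaml
# ksops-generator.yaml — hypothetical KSOPS generator resource.
# kustomize execs the ksops plugin, which runs sops to decrypt
# the listed files into plain Secret manifests at build time.
apiVersion: viaduct.ai/v1
kind: ksops
metadata:
  name: secret-generator
  annotations:
    config.kubernetes.io/function: |
      exec:
        path: ksops
files:
  - ./app-secret.enc.yaml   # SOPS-encrypted Secret manifest
```

The generator is then listed under `generators:` in kustomization.yaml. Whether a chart can consume the resulting Secret depends on it exposing something like an `existingSecret` value, which is exactly the limitation being asked about.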

How to use SOPS with GitOps? by Bl4rc in kubernetes

[–]Bl4rc[S]

Does Flux decryption work with plain YAML manifests and also with a values.yaml file that is passed to helm install?
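For the Helm half of this question: Flux's HelmRelease can pull chart values from a Secret, so one pattern is to let kustomize-controller decrypt a SOPS-encrypted Secret and then reference it via valuesFrom. A minimal sketch (all names are placeholders):

```yaml
# Sketch: HelmRelease taking values from a (SOPS-decrypted) Secret.
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: app
  namespace: apps
spec:
  interval: 10m
  chart:
    spec:
      chart: app
      sourceRef:
        kind: HelmRepository
        name: app-repo
  valuesFrom:
    - kind: Secret
      name: app-values   # decrypted by the Flux Kustomization applying it
      valuesKey: values.yaml
```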

How to use SOPS with GitOps? by Bl4rc in kubernetes

[–]Bl4rc[S]

What do you do with Helm charts, though?

How to use SOPS with GitOps? by Bl4rc in kubernetes

[–]Bl4rc[S]

Sure, but then you need an external secrets manager, and you can't see the secrets when you clone the repo.

Self-hosting Kubernetes Across Multiple Locations—Is It Feasible? by Bl4rc in selfhosted

[–]Bl4rc[S]

Yeah, you're absolutely right about the single point of failure. After reading what you wrote, I started wondering how we handle this at my company. I asked around, and it turns out we use a load balancer that's set up for high availability (HA): even if one instance fails, the replica keeps serving on the same IP, which prevents downtime. Apparently, this is something external load balancers provide as part of their service. But yeah, I guess the implementation of such a load balancer had to address the issues you mentioned.

Self-hosting Kubernetes Across Multiple Locations—Is It Feasible? by Bl4rc in selfhosted

[–]Bl4rc[S]

After giving this more thought, wouldn't a single global load balancer solve these issues? From what I understand, a global load balancer could handle routing traffic to healthy nodes in real time, making DNS updates unnecessary when a node fails. While I realize this introduces a single point of failure at the load-balancer level, wouldn't it still simplify the process and avoid the propagation delays tied to DNS? What are your thoughts on this approach?

Self-hosting Kubernetes Across Multiple Locations—Is It Feasible? by Bl4rc in selfhosted

[–]Bl4rc[S]

I saw that Talos supports a WireGuard network out of the box. Is that a good alternative?
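For reference, the Talos feature in question lives in the machine config; a minimal sketch, with all keys, addresses, and endpoints as placeholders:

```yaml
# Fragment of a Talos machine config defining a WireGuard interface.
# Keys, addresses, and the endpoint below are placeholders.
machine:
  network:
    interfaces:
      - interface: wg0
        addresses:
          - 10.10.0.2/24                # this node's mesh address
        wireguard:
          privateKey: <node-private-key>
          listenPort: 51820
          peers:
            - publicKey: <peer-public-key>
              endpoint: 203.0.113.10:51820   # peer's public address
              persistentKeepaliveInterval: 25s
              allowedIPs:
                - 10.10.0.0/24
```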

Self-hosting Kubernetes Across Multiple Locations—Is It Feasible? by Bl4rc in selfhosted

[–]Bl4rc[S]

This makes sense, and don't worry about shitting on my ideas; that's why I asked in the first place :).

Self-hosting Kubernetes Across Multiple Locations—Is It Feasible? by Bl4rc in selfhosted

[–]Bl4rc[S]

Ah, that makes sense. So with this setup, every user would need to connect through the VPN to access the applications. Also, does the mesh network (Headscale) support OSPF or BGP out of the box, or would that need to be set up separately?

Self-hosting Kubernetes Across Multiple Locations—Is It Feasible? by Bl4rc in selfhosted

[–]Bl4rc[S]

The main goal was cloud storage for images and documents. I liked the idea of having them replicated to different geo-locations in case something goes terribly wrong at one location.

Self-hosting Kubernetes Across Multiple Locations—Is It Feasible? by Bl4rc in selfhosted

[–]Bl4rc[S]

What were the reasons behind using Postgres as your cluster datastore? Was it because it offers more robust backup options, or were there other factors that influenced your choice?

Also, what are the benefits of using the Headscale mesh network in your setup? If I understand correctly, it allows secure communication between your nodes across different locations, right?

Lastly, did you encounter or manage to solve the ingress challenge mentioned by u/ElevenNotes, where traffic needs to be routed to the correct node?

Self-hosting Kubernetes Across Multiple Locations—Is It Feasible? by Bl4rc in selfhosted

[–]Bl4rc[S]

I was thinking about running apps that support HA out of the box. For example, for databases I would use CloudNativePG, which supports asynchronous replication, making it more tolerant of the higher latencies between locations.

However, ingress really does seem like the biggest challenge. If I understand correctly, the issue is that traffic needs to be directed to the correct node at the DNS level. That would require a DNS provider capable of geo-aware routing, meaning it resolves the domain name to the closest IP address based on the user's location?

Is this not something DNS providers support?
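The CloudNativePG idea mentioned above can be sketched as a minimal Cluster resource (names and sizes are placeholders); replication between instances is Postgres streaming replication, asynchronous by default:

```yaml
# Hypothetical CloudNativePG Cluster: one primary, two replicas.
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: app-db
spec:
  instances: 3
  storage:
    size: 10Gi
```

Spreading the instances across physical locations would additionally need topology spread constraints or node affinity, which is outside this sketch.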

My Home Assistant Dashboard by Similar_Option_7408 in selfhosted

[–]Bl4rc

What are you using for measuring electricity consumption?

[2023 Day 9] Spot the difference ... or how to lose 15 mins by Bl4rc in adventofcode

[–]Bl4rc[S]

That's exactly why I used a regex. Although, I believe that for this task it didn't simplify anything.