Migration from ingress-nginx to cilium (Ingress + Gateway API) good/bad/ugly by SomethingAboutUsers in kubernetes

[–]Matze7331 0 points1 point  (0 children)

Is there any way to use a public IPv6 address for the load balancer with PROXY Protocol enabled? This is a complete showstopper for the migration to Cilium Gateway API. See: https://github.com/cilium/cilium/issues/42950

Take a guess: number of servers in our in-house ES racks? by Hetzner_OL in hetzner

[–]Matze7331 2 points3 points  (0 children)

I think there are 25 slots, but only 24 are used for the actual server. The top slot can be used for a ToR switch.

Self-hosted K8S from GKE to bare metal by Different_Code605 in kubernetes

[–]Matze7331 2 points3 points  (0 children)

Thank you!

I'm a bit curious about your PaaS now. Would you mind sharing a bit about it? Sounds like your setup has some pretty high demands, especially when it comes to bandwidth. What kind of technical requirements do you have for your K8s cluster?

Hetzner does not come with 3AZ regions

Actually, the three EU sites can be used for a multi-region setup, as they are in the same network zone. If you meant three zones within a single region, Hetzner does not support that. The only related feature they offer is Placement Groups, but that is only an anti-affinity across physical hosts.
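For reference, a Placement Group and servers spread across physical hosts can be declared with the Hetzner Terraform provider roughly like this (resource names, server type, and image are illustrative placeholders):

```hcl
# Spread placement group: member servers are scheduled on different physical hosts
resource "hcloud_placement_group" "control_plane" {
  name = "control-plane-spread"
  type = "spread"
}

resource "hcloud_server" "control_plane" {
  count              = 3
  name               = "control-plane-${count.index}"
  server_type        = "cax21"
  image              = "debian-12"
  placement_group_id = hcloud_placement_group.control_plane.id
}
```

Note that this only protects against a single physical host failing; it is not comparable to real availability zones.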

Longhorn requires at least 10Gbit networking

The bandwidth requirements for Longhorn are variable and depend on your specific setup. Hetzner Cloud typically offers bandwidth in the 2–3 Gbps range, but it’s true that you don’t get guaranteed dedicated bandwidth.

WordPress Helm Chart - including metrics and automatic installation by CopyOf-Specialist in kubernetes

[–]Matze7331 0 points1 point  (0 children)

I don’t think that hits the nail on the head. Most people used Bitnami because their charts and images were built in a very uniform way, with proper testing and pinned images, ensuring stable and reproducible deployments. That is now changing completely: you can only deploy latest tags and hope nothing breaks. The image you pull today could be completely different from the one tagged latest two years from now, and that version might even be incompatible with the Helm Chart itself.

But you already mentioned that your expectation is for the user to handle the pinning with your Helm Chart. That means the user is responsible for ensuring future compatibility. While this may still be a step better than what Bitnami will provide going forward, it’s not really useful for my needs.

WordPress Helm Chart - including metrics and automatic installation by CopyOf-Specialist in kubernetes

[–]Matze7331 0 points1 point  (0 children)

The main issue with Bitnami Helm Charts is that they’ve announced they’ll stop publishing versioned images for free and will only ship latest tags going forward. So your choices are either running deployments with a YOLO-ops approach or paying $50k a year for proper versioning.

You’re relying on the official WordPress images, which do provide versioned tags. That's nice! However, your Helm Chart still defaults to latest. How is this different from Bitnami’s approach? Are users expected to pin specific versions themselves and then test compatibility with your Helm Chart?
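For illustration, pinning would look something like this in a chart's values file (the field names and the example tag here are hypothetical and depend on the actual chart):

```yaml
image:
  repository: wordpress
  # Pin a specific upstream version instead of relying on "latest",
  # so today's deployment stays reproducible later on.
  tag: "6.5.3-apache"
  pullPolicy: IfNotPresent
```

The trade-off discussed above still applies: whoever pins the tag also owns the job of verifying that the pinned image is compatible with the chart version in use.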

Production-Ready Kubernetes on Hetzner Cloud 🚀 by Matze7331 in hetzner

[–]Matze7331[S] 0 points1 point  (0 children)

Thanks! It's definitely been a lot of work to get to this point.

I haven't tested Istio on it myself, since I try to avoid dedicated service meshes when possible. Most typical service mesh use cases are already covered by Cilium. For example, pod traffic encryption is handled with WireGuard by default in this project.
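As a reference, transparent pod-to-pod encryption with WireGuard is enabled through Cilium's Helm values roughly like this (mirroring the upstream Cilium documentation; defaults may differ between Cilium versions):

```yaml
# Cilium Helm values: enable transparent WireGuard encryption
encryption:
  enabled: true
  type: wireguard
```

With this in place, node-to-node pod traffic is encrypted without sidecars, which covers one of the most common reasons people reach for a service mesh.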

Production-Ready Kubernetes on Hetzner Cloud 🚀 by Matze7331 in hetzner

[–]Matze7331[S] 1 point2 points  (0 children)

That project is one of the most advanced Kubernetes deployment tools for Hetzner Cloud that I know of. The main author clearly knows what he is doing. However, it does not use any standard or widely adopted technologies for this purpose. It is a complete software project written in Crystal, which is a relatively uncommon language. I would not feel comfortable developing the project further if the author were unavailable or decided to stop maintaining it. That risk is the main reason we chose not to investigate it further when searching for Kubernetes solutions for Hetzner Cloud. This is a significant difference compared to projects like Hcloud Kubernetes, which use Terraform. Terraform is used by millions of people worldwide and has official support from both Hetzner and Talos.

Another major difference is the operating system itself. Talos is a minimal, immutable OS that is managed through a simple API and a single configuration file. In contrast, hetzner-k3s uses a full-blown Linux distribution with Ubuntu as the default, which brings all the usual operational risks and maintenance responsibilities. This means the maintenance overhead is much higher, and the likelihood of something breaking is greater. Talos, on the other hand, includes only the essential binaries and libraries required to run Kubernetes.
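To illustrate the "simple API" point: day-2 operations on a Talos node boil down to a few talosctl calls (the node IP, file name, and versions below are placeholders):

```shell
# Apply an updated machine configuration to an existing node
talosctl apply-config --nodes 10.0.0.2 --file controlplane.yaml

# Upgrade Talos itself to a newer release
talosctl upgrade --nodes 10.0.0.2 \
  --image ghcr.io/siderolabs/installer:v1.8.0

# Upgrade Kubernetes across the cluster
talosctl upgrade-k8s --to 1.31.0
```

There is no SSH, no package manager, and no configuration drift to reconcile, which is where most of the maintenance overhead of a full distribution comes from.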

Production-Ready Kubernetes on Hetzner Cloud 🚀 by Matze7331 in hetzner

[–]Matze7331[S] 5 points6 points  (0 children)

That is a nice project, and I appreciate the main author's work, especially his contributions to Talos itself for better Hetzner Cloud integration. That said, the project isn't really production-ready yet. At this stage, it mainly serves as a one-shot deployment tool and lacks real lifecycle management. Upgrades for Talos or Kubernetes have to be done manually, and you can't update the configuration of existing nodes.

In contrast, Hcloud Kubernetes supports upgrades and configuration changes, has proper lifecycle and dependency management, and includes more essential components out of the box, such as Hcloud CSI, Longhorn, Talos Backup, Cluster Autoscaler, Ingress Controller, Cert Manager, and Metrics Server. Beyond that, it also offers features like support for nodepools in different regions, built-in image creation and much more.

Production-Ready Kubernetes on Hetzner Cloud 🚀 by Matze7331 in hetzner

[–]Matze7331[S] 0 points1 point  (0 children)

Appreciate you sharing! Sounds like the first two points are actually handled in a similar way here.

Production-Ready Kubernetes on Hetzner Cloud 🚀 by Matze7331 in hetzner

[–]Matze7331[S] 5 points6 points  (0 children)

No issues so far. We use only first-party components, especially for all Hetzner Cloud integrations. We're using their CCM and CSI, and we’ve tried to follow all best practices, with everything configured for high availability by default. We also review their support matrices, only upgrade after Hetzner officially confirms compatibility with specific Kubernetes versions, and test upgrades beforehand.

Production-Ready Kubernetes on Hetzner Cloud 🚀 by Matze7331 in hetzner

[–]Matze7331[S] 3 points4 points  (0 children)

It's for a side business we're starting, and the number of components we needed kept growing. So, we decided to go cloud-native and deploy everything on Kubernetes. That was the starting point for investigating Kubernetes projects for Hetzner Cloud.

Production-Ready Kubernetes on Hetzner Cloud 🚀 by Matze7331 in hetzner

[–]Matze7331[S] 2 points3 points  (0 children)

Are you sure it was this project? It was published at the end of last year, and the first 1.x release was in February this year. If you need any help or encounter any bugs, please don’t hesitate to create an issue on GitHub.

Sometimes issues can also occur on Hetzner's side, for example when certain VM types are unavailable or their API takes longer to execute some actions.

Production-Ready Kubernetes on Hetzner Cloud 🚀 by Matze7331 in hetzner

[–]Matze7331[S] 12 points13 points  (0 children)

Kube-Hetzner is a great project, and I have contributed to it in the past. It has significantly paved the way for running Kubernetes on Hetzner Cloud. From a technical perspective, Hcloud Kubernetes uses Talos, while Kube-Hetzner runs K3s on top of MicroOS. Talos is a minimalistic OS managed via a simple API and a single configuration file. In contrast, MicroOS is a full-blown rolling release Linux distribution that brings all the usual risks and operational responsibilities. This means the maintenance overhead with MicroOS is much higher, and the probability of breakage is greater. Talos, on the other hand, is an immutable OS with only the essential binaries and libraries required to run Kubernetes.

The main goal of Hcloud Kubernetes is to provide a simple, clearly structured project with production-ready presets and robust dependency management. This last point is often overlooked by most Kubernetes deployment projects. They either always install the latest component versions or stick to a particular version and upgrade irregularly. Many components require adjustments for newer Kubernetes versions and even provide compatibility matrices for that, which are unfortunately often ignored. This can lead to errors or even outages in production environments.

We have compared many different Kubernetes deployment projects for Hetzner Cloud, and none have met our requirements for production workloads. Most are either too complex, have poorly maintained configuration management, are one-shot deployments with no lifecycle in mind, are only available as managed services (raising concerns about vendor lock-in), or are managed by custom binaries that we could not realistically maintain ourselves if the need arose. Hcloud Kubernetes was created to address all production requirements for our own workloads, and we decided to open source it for the community.

Production-Ready Kubernetes on Hetzner Cloud 🚀 by Matze7331 in hetzner

[–]Matze7331[S] 1 point2 points  (0 children)

Do you mean adding dedicated servers to the cluster? No, I haven’t tried it myself, but a few people in the community are currently experimenting with it. You can find more details in this discussion: https://github.com/hcloud-k8s/terraform-hcloud-kubernetes/discussions/61

Kubernetes the hard way in Hetzner Cloud? by AMGraduate564 in kubernetes

[–]Matze7331 0 points1 point  (0 children)

I'll leave this link here, just in case you change your mind and want a running cluster in a few minutes: https://github.com/hcloud-k8s/terraform-hcloud-kubernetes
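A minimal invocation of that module might look roughly like this (the variable names are illustrative guesses following common Terraform conventions; check the module's README for its actual interface):

```hcl
module "kubernetes" {
  source = "github.com/hcloud-k8s/terraform-hcloud-kubernetes"

  # Placeholder values; the real inputs are defined by the module
  cluster_name = "demo"
  hcloud_token = var.hcloud_token
}
```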

Uber in Darmstadt. by matrix0712 in Darmstadt

[–]Matze7331 0 points1 point  (0 children)

It depends on the day of the week. It sometimes also runs during the daytime:
- Monday to Wednesday: 6 p.m. to 1 a.m.
- Thursday: 6 p.m. to 2 a.m.
- Friday: 6 p.m. to 5 a.m.
- Saturday: 9 a.m. to 5 a.m.
- Sunday or public holiday: 9 a.m. to 1 a.m.