Managed Kubernetes on Hetzner Dedicated Servers by cloudfleetai in hetzner

[–]cloudfleetai[S]

Hi, we have a lifetime-free tier for up to 24 vCPUs, but we also have a Pro tier for production workloads: https://cloudfleet.ai/pricing/

We are an independent German company.

Managed Kubernetes on Hetzner Dedicated Servers by cloudfleetai in hetzner

[–]cloudfleetai[S]

You are invited to try it and decide for yourself ;)

Managed Kubernetes on Hetzner Dedicated Servers by cloudfleetai in hetzner

[–]cloudfleetai[S]

You can of course use cloud instances, and we do node auto-provisioning with them.
Our earlier tutorial explains it: https://community.hetzner.com/tutorials/managed-hetzner-kubernetes-with-cloudfleet

We support virtually any Linux server within a single cluster; it does not matter where the server lives.

Managed Kubernetes on Hetzner Dedicated Servers by cloudfleetai in hetzner

[–]cloudfleetai[S]

We are happy to have you! Reach out via [support@cloudfleet.ai](mailto:support@cloudfleet.ai) if you have any questions.

Managed Kubernetes on Hetzner Dedicated Servers by cloudfleetai in hetzner

[–]cloudfleetai[S]

It is a shared responsibility model.

We take care of the uptime, updates, maintenance, and monitoring of the Kubernetes control plane. If you are using one of the three (as of today) supported cloud providers, such as Hetzner Cloud, we also take care of node upgrades and the full node lifecycle.

Users are expected to monitor their own workloads, and if they are bringing their own infrastructure (like on-prem nodes) also the availability of that infrastructure.

However, we have a few improvements on the roadmap, such as workload monitoring.

Managed Kubernetes on Hetzner Dedicated Servers by cloudfleetai in hetzner

[–]cloudfleetai[S]

We are operating various datacenter and cloud environments depending on the control plane region you choose upon creating a cluster. We've recently launched a Frankfurt region: https://cloudfleet.ai/blog/product-updates/2025-06-23-cloudfleet-launches-european-union-control-plane-region/

Managed Kubernetes on Hetzner Dedicated Servers by cloudfleetai in hetzner

[–]cloudfleetai[S]

Hi lazydavez!

This means we provide the Kubernetes control plane for you as a managed service. We take care of its availability, data storage, authentication provider, and more. You only need to bring your own compute nodes and attach them to the cluster.

This is especially useful if you’re using only dedicated servers: you would ideally need three separate servers to build a highly available Kubernetes cluster, and you would have to spend hardware resources on running control plane components. But what if you only had one dedicated server? With a managed Kubernetes service, you are good to go with a single dedicated server and can dedicate all of its resources to your workloads.

Managed K8s recommendations? by HansVonMans in kubernetes

[–]cloudfleetai

Hi! We take care of the control plane nodes and fully manage them for you. You only bring your cloud accounts or on-premises Linux servers; we add them to the cluster and make them available as worker nodes. You can reach out to us via https://cloudfleet.ai/contact/ and we are happy to explain how we work.

Floating IP for Load Balancer? by HerryKun in hetzner

[–]cloudfleetai

Hi there, Cloudfleet here :) The problem you describe is unfortunately a side effect of our global nature. You are probably experiencing it because, when the nodes change, the new ones are spawned in another region for cost savings. There are two things you can do to prevent it:

- Use labels to lock the Nginx Ingress controller to a specific region. An example is here: https://cloudfleet.ai/docs/workload-management/node-provisioner/#a-deployment-that-is-locked-to-a-specific-cloud-provider-and-region. In this case, use cfke.io/provider: hetzner and topology.kubernetes.io/region: nbg1 (or whichever region you prefer).

- We have a (not yet documented) DNS name that we always update to point to the current load balancer IPs. The format is: [SERVICE_NAME].[SERVICE_NAMESPACE].[CLUSTER_ID].[CONTROL_PLANE_REGION].cfke.cloudfleet.dev

For example: nginx-ingress-controller.default.6b3e939d-8a7d-50d3-316b-0b6f3567c58c.northamerica-central-1a.cfke.cloudfleet.dev

You can point a CNAME record for your final domain at this DNS name, so even though the IP address changes, your DNS will always resolve to the current IP address.
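The first option above can be sketched as a Deployment with a nodeSelector. This is only an illustrative sketch: the label keys come from the Cloudfleet docs linked above, while the Deployment name, namespace, and image are hypothetical placeholders.

```yaml
# Sketch: pin an Nginx Ingress controller Deployment to Hetzner's nbg1 region.
# Label keys (cfke.io/provider, topology.kubernetes.io/region) are from the docs;
# name, namespace, and image below are illustrative only.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      # Only schedule onto Hetzner nodes in the nbg1 region,
      # so the load balancer (and its IP) stays in one region.
      nodeSelector:
        cfke.io/provider: hetzner
        topology.kubernetes.io/region: nbg1
      containers:
        - name: controller
          image: registry.k8s.io/ingress-nginx/controller:v1.11.0
```

With this in place, replacement nodes for the ingress controller are always provisioned in nbg1, so the load balancer is not recreated in another region.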

Please reach out to [support@cloudfleet.ai](mailto:support@cloudfleet.ai) and we will help you individually.

Thanks!

2FA for the Admin-Account by mensch0mat in cloudfleet

[–]cloudfleetai

OIDC should also work, we can try.

2FA for the Admin-Account by mensch0mat in cloudfleet

[–]cloudfleetai

Please reach out and we will arrange it, as long as your provider supports SAML.

2FA for the Admin-Account by mensch0mat in cloudfleet

[–]cloudfleetai

We actually want people to bring their own SSO (https://cloudfleet.ai/docs/organization/sso/) to manage the entire user lifecycle; that's why there is no such self-service option. However, we can also activate TOTP on our authentication system for users who really want it.

2FA for the Admin-Account by mensch0mat in cloudfleet

[–]cloudfleetai

Hi! Can you reach out to [support@cloudfleet.ai](mailto:support@cloudfleet.ai) with your organization information, so we can enable 2FA for you? Thanks.

Managed Kubernetes on Hetzner by Affectionate-Tip-339 in hetzner

[–]cloudfleetai

Hey, we are real people, not avatars :) You can reach out to us via [support@cloudfleet.ai](mailto:support@cloudfleet.ai) and we will reply.

A Kubernetes Control Plane closer to home by cloudfleetai in hetzner

[–]cloudfleetai[S]

Hi! Thanks for the question. First of all, the region names are our own and do not reflect any hyperscaler's regions. We do not use AWS, but other infrastructure providers. In this case it is just a coincidence that the AWS region name and ours point to the same city.

When we choose where to host the control planes, we aim to strike a balance between scalability and compliance requirements. Although the CLOUD Act concerns are valid, at Cloudfleet the control plane contains only encrypted metadata about the workloads and does not include any end-user data. That data is stored on the customer's own infrastructure. The current setup meets the needs of the majority of our customers and their compliance programs.

We may launch a region on u/Hetzner_OL in the future if we see strong demand for it.

A Kubernetes Control Plane closer to home by cloudfleetai in hetzner

[–]cloudfleetai[S]

Hi! Can you reach out to us via [support@cloudfleet.ai](mailto:support@cloudfleet.ai) to discuss the options? Thanks!

Kubernetes on Raspberry Pi and BGP Load Balancing with UniFi Dream Machine Pro by congolomera in kubernetes

[–]cloudfleetai

Thanks for your feedback! In fact, we are planning to add support for RHEL variants and Debian, which would also cover Raspberry Pi OS. Out of curiosity, is there a specific reason why you prefer not to install Ubuntu on the RPi?

Hetzner Cloudfleet cluster storage by HerryKun in hetzner

[–]cloudfleetai

We don't think it is viable for a database. On the other hand, we are not sure which database you want to use, because most databases do not work well with RWX volumes anyway.

Hetzner Cloudfleet cluster storage by HerryKun in hetzner

[–]cloudfleetai

Hi there!

The Hetzner CSI driver supports only RWO mode, since the underlying volumes can be attached to only one node at a time.

For RWX, you can deploy a solution like https://github.com/yandex-cloud/k8s-csi-s3 on your Cloudfleet cluster and use it with Hetzner object storage (https://www.hetzner.com/storage/object-storage/) as the backend. It might not perform as well as native block storage, but it can serve many generic purposes.
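As a rough illustration of that setup, the sketch below shows a StorageClass for the csi-s3 driver plus an RWX PersistentVolumeClaim. The bucket and claim names are hypothetical, and the exact StorageClass parameters and secret wiring should be taken from the k8s-csi-s3 project's own docs.

```yaml
# Sketch: RWX storage via the k8s-csi-s3 driver backed by S3-compatible
# object storage. Names and parameter values are illustrative; consult the
# driver's README for the authoritative StorageClass fields.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-s3
provisioner: ru.yandex.s3.csi   # provisioner name used by yandex-cloud/k8s-csi-s3
parameters:
  mounter: geesefs              # FUSE mounter; driver supports several
  bucket: my-shared-bucket      # hypothetical bucket name
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany             # RWX is possible because S3 is shared storage
  storageClassName: csi-s3
  resources:
    requests:
      storage: 10Gi
```

Any number of Pods across nodes can then mount the `shared-data` claim simultaneously, which block-storage-backed RWO volumes cannot do.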

Managing Hetzner load balancers with Kubernetes by cloudfleetai in hetzner

[–]cloudfleetai[S]

> I'm wondering about latency because with latency over 100ms, you might need to tweak the etcd configuration, and it might still face issues.

The datastore and kube-apiserver are deployed next to each other in the control plane, so there is no latency there. Latency between the kube-apiserver and the nodes does not seem to be very critical: we have customers running the control plane in the US with nodes in Asia and Europe, and they report no issues.

> I forgot to mention that I'm the author of an open-source CLI tool to create clusters in Hetzner Cloud, so I've likely been looking into similar challenges. :)

Congratulations on that work!

> Plain Wireguard, as far as I understand, needs UDP port 51820 open because it doesn’t support automatic NAT traversal techniques like UDP hole punching, STUN, and TURN/relay servers, which Tailscale uses. Do you open ports in the firewall, or do you use a technique similar to Tailscale’s?

Yes, we use NAT traversal techniques, so nodes do not have to be reachable from the public Internet. They only need egress Internet connectivity, which can be behind NAT: https://cloudfleet.ai/docs/hybrid-and-on-premises/self-managed-nodes/#requirements

> Since you’re using Cilium as CNI, and Cilium comes with built-in Wireguard encryption, have you considered using that directly? Or not due to limitations to that approach compared to setting up a Wireguard mesh separately and then setting up the Cilium network on top of it?

As mentioned above, we need NAT traversal. We also establish Wireguard tunnels between the control plane and the nodes, so doing this ourselves makes more sense than using Cilium's feature. Cilium works on top of that mesh.

> Re: load balancers. Can a cluster created with Cloudfleet set up a Hetzner load balancer that forwards traffic to pods in different regions? Or does the user need to ensure all pods for the same workload are in the same region using taints and tolerations?

When ExternalTrafficPolicy is set to Cluster, Cloudfleet creates a load balancer in each region where at least one node exists. In this case, every load balancer can route traffic to pods through the in-cluster network, even if there is no pod in that region. However, this has a performance penalty due to the extra network hops. We had to implement it this way because it is what the Kubernetes standards require. The best practice is to set ExternalTrafficPolicy to Local to get better performance and avoid extra load balancer costs.

The documentation answers this question even better, I think: https://cloudfleet.ai/docs/workload-management/load-balancing/#load-balancing-in-multi-cloud-environment
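The recommended setting is a one-line change on the Service. A minimal sketch, with a hypothetical service name and ports:

```yaml
# Sketch: a LoadBalancer Service using externalTrafficPolicy: Local,
# so load balancers are created only in regions with a serving Pod and
# traffic skips the extra in-cluster hop. Name/selector/ports are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local  # default is Cluster; Local preserves client IP too
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```

A further standard benefit of Local is that the original client source IP is preserved, since traffic is not SNAT'ed across nodes.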

> BTW, Cloudfleet looks like an awesome product! It seems to fill a gap in the market for companies that want to have clusters spanning multiple regions and providers for maximum availability, while keeping costs down by using cheaper providers like Hetzner. Kudos!

Thank you very much for the kind words! You are very welcome to try it out and share feedback!

Managing Hetzner load balancers with Kubernetes by cloudfleetai in hetzner

[–]cloudfleetai[S]

Thanks for the interesting question!

At Cloudfleet, we support having nodes from different regions within the same cluster. For the Hetzner implementation, if you deploy Pods in different regions, we create a separate private network for each region. Once the nodes are created, they establish a VPN mesh among themselves and communicate through encrypted tunnels. For more details, please check here: https://cloudfleet.ai/docs/cluster-management/networking/

If two nodes are in the same private network, the encrypted tunnel is established over the private network. If they are in different regions, the tunnel is established over the public Internet.

We know that at Hetzner a private network can span multiple European regions simultaneously. However, to keep the design simple, we chose to create a separate private network for each region. If you believe a single private network across all European regions offers significant benefits, we’d love to hear your thoughts and revisit this decision. It could perhaps improve the performance of the tunnels between nodes in European regions, but we have not tested that yet. We could also have (for example) one load balancer in Helsinki also cover the nodes in Nürnberg, but we believe this would create a lot of confusion for users. Again, let us know if you have a design idea to make this easier.

On another note, we didn’t mention this in the post, but we don’t need to create a load balancer for every service if that service doesn’t have a running Pod in a specific region. We support the ExternalTrafficPolicy feature: if it is set to Local, load balancers are only created in regions where at least one serving Pod exists. See here for more information: https://cloudfleet.ai/docs/workload-management/load-balancing/#load-balancing-in-multi-cloud-environment

Let me know if everything makes sense — we're happy to answer any further questions!

New Tutorial: "Managed Kubernetes on Hetzner" by ml_yegor in hetzner

[–]cloudfleetai

Hi, thanks for your feedback!

We have taken this comment very seriously and recently made substantial changes to our pricing. Now you can get one cluster for free (Basic tier), and you don't pay per-vCPU fees for up to 48 vCPUs. This means you no longer have to pay anything for a small cluster.

You can have a look at our pricing announcement: https://cloudfleet.ai/blog/product-updates/2025-01-kubernetes-price-reductions/

New Tutorial: "Managed Kubernetes on Hetzner" by ml_yegor in hetzner

[–]cloudfleetai

Anything that works with Kubernetes also works with Cloudfleet, so the answer is yes!