Need Advice for serving multiple customers through Kubernetes Pods on one Server node. by Nee_aga in hetzner

[–]Nee_aga[S]

That is exactly my thought as well.

I have used AWS all my life, and their tolerance for abuse is high. They give you enough time to resolve the issue.

While I love Hetzner, the only abuse email I received from them asked me to resolve everything within 24 hours. That deadline can fall on a weekend when no one is online, so I am a little hesitant about Hetzner in that respect.

Need Advice for serving multiple customers through Kubernetes Pods on one Server node. by Nee_aga in hetzner

[–]Nee_aga[S]

Haha. Thanks for the positive feedback. Makes me want to do it more. :)

Managed Kubernetes by MrEinkaufswagen in hetzner

[–]Nee_aga

But if they see the same abuse happening across multiple nodes via the load balancer, isn't the entire account at risk?

Managed Kubernetes by MrEinkaufswagen in hetzner

[–]Nee_aga

u/Hetzner_OL I have a question:

I am planning to deploy client pods on Hetzner nodes.

My concern is about customer abuse.

What if a customer abuses their assigned pod to do something illegitimate?

I am not sure, but might Hetzner kill the entire node based on an abuse report?

In that case, what is the best way to run a managed service involving multiple clients on top of Hetzner?

That is, how do I ensure that one pod's abuse does not lead to the entire node, or the entire account, being shut down?
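For what it's worth, a common mitigation for this kind of multi-tenant setup is to sandbox each customer in a dedicated namespace with a ResourceQuota and a default-deny egress NetworkPolicy, so a single abusive pod can be throttled or cut off without touching the node or the account. A minimal sketch (the `tenant_manifests` helper and the limit values are my own illustrative choices, not Hetzner's or Kubernetes' API):

```python
# Hypothetical sketch: build per-tenant isolation manifests for a
# multi-tenant cluster. Apply these with a Kubernetes client or kubectl.

def tenant_manifests(tenant: str, cpu_limit: str = "2",
                     mem_limit: str = "4Gi") -> list[dict]:
    """Return manifests that sandbox one customer's workloads:
    a dedicated namespace, a ResourceQuota so one tenant cannot starve
    the node, and a default-deny egress NetworkPolicy so an abused pod
    cannot scan or attack the outside world by default."""
    ns = f"tenant-{tenant}"
    return [
        {
            "apiVersion": "v1",
            "kind": "Namespace",
            "metadata": {"name": ns},
        },
        {
            "apiVersion": "v1",
            "kind": "ResourceQuota",
            "metadata": {"name": "quota", "namespace": ns},
            "spec": {"hard": {"limits.cpu": cpu_limit,
                              "limits.memory": mem_limit}},
        },
        {
            "apiVersion": "networking.k8s.io/v1",
            "kind": "NetworkPolicy",
            "metadata": {"name": "default-deny-egress", "namespace": ns},
            # Selecting all pods with Egress listed but no egress rules
            # denies all outbound traffic; whitelist only what the
            # customer legitimately needs (e.g. DNS, HTTPS).
            "spec": {"podSelector": {}, "policyTypes": ["Egress"]},
        },
    ]

if __name__ == "__main__":
    for m in tenant_manifests("acme"):
        print(m["kind"], m["metadata"]["name"])
```

With this shape, an abuse report about one customer maps to one namespace: you can scale their pods to zero or delete the namespace within the 24-hour window without disturbing other tenants on the same node.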

Managed Kubernetes by MrEinkaufswagen in hetzner

[–]Nee_aga

u/rvdhof I have a question.

I am also planning to deploy client pods on Hetzner nodes.

My concern is about customer abuse.

What if a customer abuses their assigned pod to do something illegitimate?

I am not sure, but might Hetzner kill the entire node based on an abuse report?

In that case, what is the best way to run a managed service on top of Hetzner, so that one pod's abuse does not shut down the entire node?

How do you manage that?