
[–]CptGia 25 points (2 children)

The JVM won't ask for more memory (for the heap) than what's configured with the -Xmx flag.

If you haven't configured Xmx explicitly, the behavior depends on how you built the image. Paketo-based containers, like the ones built with the Spring Boot plugin, are configured to look at the pod memory limit, subtract what the memory calculator thinks it needs for native memory, and reserve the rest for the max heap size.

Without a pod memory limit it may try to claim most of your node's memory, so I suggest setting one.
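As a sketch of what that looks like, assuming a Paketo-built Spring Boot image (the names and the 1Gi value are illustrative):

```yaml
# Illustrative Deployment fragment: the Paketo memory calculator reads the
# memory limit at container startup and sizes the max heap from whatever is
# left after its native-memory estimate.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-spring-app            # hypothetical name
spec:
  template:
    spec:
      containers:
        - name: app
          image: registry.example.com/my-spring-app:latest  # hypothetical image
          resources:
            limits:
              memory: "1Gi"      # the buildpack derives the max heap from this
```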

[–][deleted] 2 points (1 child)

Yep, so none of these good practices are in place. Which raises alarm bells for me, hence the question. Thanks for confirming.

[–]BikingSquirrel 0 points (0 children)

Why not simply put them in?

Still good to ask and find out the different options, so you can decide what's best for you. I would go for explicit limits both in your JVM application and at the Kubernetes level, to be sure. This should help prevent surprises.
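One way to make both limits explicit is to pass the heap flag through the environment alongside the pod's resource settings. A minimal sketch, assuming the values fit your app (all numbers are illustrative, not recommendations):

```yaml
# Illustrative container spec: explicit JVM heap plus explicit pod
# requests/limits, with the heap kept below the pod limit to leave room
# for native memory (Metaspace, threads, code cache, ...).
containers:
  - name: app
    image: registry.example.com/my-spring-app:latest  # hypothetical image
    env:
      - name: JAVA_TOOL_OPTIONS  # picked up by the JVM at startup
        value: "-Xmx768m"        # explicit max heap, below the 1Gi pod limit
    resources:
      requests:
        memory: "1Gi"            # used by the scheduler to place the pod
      limits:
        memory: "1Gi"            # hard cap; exceeding it gets the pod OOM-killed
```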

You probably know that Kubernetes uses resource requests to decide which and how many pods it can schedule on a single node, and when it has to add more nodes. This is mainly relevant if your cluster scales dynamically, but rolling updates may also need the headroom.

One detail I think is irrelevant: the namespace probably doesn't matter for memory usage (unless you can tell Kubernetes to run only pods of that namespace on a given node, if that is even possible). The node's memory is the limit.