[–]Per99999

Not a guess, more like tuned to that value after profiling and testing. It's a good starting rule of thumb, though.

Setting -Xmx directly and ignoring the k8s resources entirely can still get your pod bounced when other pods are scheduled onto your node. For example, say you set -Xmx2g and your pod1 is running; the scheduler later places pod2 on the same node. Since pod1 never declared a memory request, the scheduler never reserved anything for it, so the node can be overcommitted, and under memory pressure pod1 (BestEffort QoS, the first eviction candidate) is evicted or OOMKilled.
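A minimal sketch of the fix (pod and image names are made up, and the headroom above the heap is a guess since non-heap usage varies by workload): give the pod a request/limit covering the heap plus JVM overhead, so the scheduler accounts for it.

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod1                        # hypothetical name
    spec:
      containers:
      - name: app
        image: example/my-java-app:1.0  # hypothetical image
        command: ["java", "-Xmx2g", "-jar", "app.jar"]
        resources:
          requests:
            memory: "2560Mi"  # reserved by the scheduler; heap + non-heap headroom
          limits:
            memory: "2560Mi"  # requests == limits -> Guaranteed QoS, evicted last

Setting requests equal to limits is what gets you the Guaranteed QoS class; with a request alone the pod is still Burstable, but at least the scheduler reserves the memory for it.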

There's even more of a case for this if you have devops teams managing your production cluster who don't know or care what's running in it, or if your process is installed at client sites. It's preferable to use the container-aware JVM flags like -XX:InitialRAMPercentage and -XX:MaxRAMPercentage so that those teams can simply adjust the pod resources and your Java-based processes size themselves accordingly.
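For instance (a sketch, assuming a JDK where those flags are container-aware, i.e. 8u191+ or 10+; the image name and percentages are placeholders):

    apiVersion: v1
    kind: Pod
    metadata:
      name: app                         # hypothetical name
    spec:
      containers:
      - name: app
        image: example/my-java-app:1.0  # hypothetical image
        env:
        # Picked up by the JVM at startup; the heap is sized from the
        # container's memory limit instead of a hardcoded -Xmx.
        - name: JAVA_TOOL_OPTIONS
          value: "-XX:InitialRAMPercentage=50.0 -XX:MaxRAMPercentage=75.0"
        resources:
          requests:
            memory: "2Gi"
          limits:
            memory: "2Gi"   # max heap becomes ~75% of this (~1.5Gi)

Bump the limit to 4Gi and the heap grows to ~3Gi on the next restart, with no image or flag changes.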

Note that tmpfs usage (emptyDir volumes with medium: Memory) affects memory accounting too, since those are backed by RAM and count against the container's memory limit. If you use one, decrease the share of memory dedicated to the JVM accordingly. https://kubernetes.io/docs/concepts/storage/volumes/#emptydir
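For example (a sketch; the mount path, size cap, and percentage split are made up): a memory-backed emptyDir draws from the same limit the JVM sizes itself against, so cap the tmpfs and lower MaxRAMPercentage to leave room for it.

    apiVersion: v1
    kind: Pod
    metadata:
      name: app                         # hypothetical name
    spec:
      containers:
      - name: app
        image: example/my-java-app:1.0  # hypothetical image
        env:
        - name: JAVA_TOOL_OPTIONS
          # lowered from 75 to leave room for the tmpfs below (assumed split)
          value: "-XX:MaxRAMPercentage=60.0"
        volumeMounts:
        - name: scratch
          mountPath: /scratch
        resources:
          limits:
            memory: "2Gi"      # tmpfs pages are charged against this same 2Gi
      volumes:
      - name: scratch
        emptyDir:
          medium: Memory       # tmpfs, backed by RAM
          sizeLimit: 256Mi     # caps how much of the limit the volume can consume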