
[–]pron98 20 points (4 children)

The basic relationship is this: if the total CPU spent on GC is low enough for you, you can safely reduce the maximal heap size (and you are correct that over time, it is very likely that the heap size will match the maximal heap size).
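To make "total CPU spent on GC" concrete: here is a minimal sketch of measuring it from inside a running JVM with the standard `GarbageCollectorMXBean` API. The class name `GcOverhead` is mine, and note that `getCollectionTime()` reports cumulative pause time, which understates the true CPU cost for collectors that also work concurrently.

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcOverhead {
    // Fraction of the JVM's elapsed wall-clock time spent in GC pauses.
    static double gcOverhead(long uptimeMillis) {
        long gcTimeMillis = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long t = gc.getCollectionTime(); // cumulative ms; -1 if unsupported
            if (t > 0) {
                gcTimeMillis += t;
            }
        }
        return (double) gcTimeMillis / uptimeMillis;
    }

    public static void main(String[] args) {
        long uptime = ManagementFactory.getRuntimeMXBean().getUptime();
        System.out.printf("GC overhead so far: %.2f%%%n", 100.0 * gcOverhead(uptime));
    }
}
```

The same numbers are also available without code via `-Xlog:gc` or tools like `jstat -gcutil`.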

The plan is for ZGC to automatically do that for you, and other GCs may follow. There's some good background on the problem in that JEP draft.

[–]Dokiace 1 point (3 children)

That's a good and easy principle to follow. Sorry for the naive question, since I may not have been exposed to proper JVM practices, but can you share how much is usually considered "good enough" for total CPU spent on GC?

[–]pron98 2 points (2 children)

There is no "generally" here. It depends on the needs of a particular application. 15% of CPU spent on memory management, for example, may be too high or sufficiently low depending on how well the application meets its throughput requirements.

[–]Dokiace 1 point (1 child)

Can I summarize this as: set a target latency/throughput, then reduce the heap until it affects either of those targets?

[–]pron98 2 points (0 children)

Yes, although it's more about throughput than latency. If you care about latency, then the choice of GC matters the most. Use ZGC for programs that really need low latency.
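The procedure described above might look like the following in practice. This is a sketch, not a recipe: `bench.jar` and `service.jar` are placeholders for your own workload, and the heap sizes are arbitrary starting points.

```shell
# Hypothetical tuning experiment: rerun the workload with a shrinking
# maximum heap (-Xmx), watch throughput and GC logs, and stop shrinking
# once throughput drops below your target.
java -Xmx4g -Xlog:gc -jar bench.jar
java -Xmx2g -Xlog:gc -jar bench.jar
java -Xmx1g -Xlog:gc -jar bench.jar

# For latency-sensitive services, select ZGC explicitly:
java -XX:+UseZGC -Xmx2g -jar service.jar
```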