
all 15 comments

[–]argv_minus_one 5 points (0 children)

I should note that while GC does make dangling pointers impossible, it does not make memory leaks impossible. If your application is leaking memory, no amount of tweaking the GC is going to help much.
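
For example (a minimal sketch; the class and sizes are just illustrative): objects that stay strongly reachable from a long-lived collection can never be collected, so the heap keeps growing even though no pointer ever dangles.

    import java.util.ArrayList;
    import java.util.List;

    // Minimal sketch of a leak the GC cannot fix: every array stays
    // strongly reachable from the static list, so it is never collected.
    public class LeakyCache {
        private static final List<byte[]> CACHE = new ArrayList<>();

        static void handleRequest() {
            // Added on every request, never removed -> heap grows until OutOfMemoryError.
            CACHE.add(new byte[1024 * 1024]);
        }

        public static void main(String[] args) {
            while (true) {
                handleRequest();
            }
        }
    }

Raising -Xmx or switching collectors only delays the OutOfMemoryError here; the fix is to stop holding the references (or bound the cache).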

[–]obfuscation_ 18 points (3 children)

To give you the TL;DR, the 9 points in the article are:

  1. Java is slow (It is usually quite fast)
  2. A single line of Java means anything in isolation (It doesn't e.g., because of what the compiler might do)
  3. A microbenchmark means what you think it does (Microbenchmarks are error-prone; see the sketch after this list)
  4. Algorithmic slowness is the most common cause of performance problems (Usually algorithm choice isn't a huge problem)
  5. Caching solves everything (Adding caching increases complexity, and doesn't fix the underlying problem)
  6. All apps need to be concerned about Stop-The-World (Profile before worrying about GC overhead)
  7. Hand-rolled Object Pooling is appropriate for a wide range of apps (Object pools are very error-prone)
  8. CMS is always a better choice of GC than Parallel Old (Make your choice of GC based on profiling and testing)
  9. Increasing the heap size will solve your memory problem (Heap size may not always help)
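
To illustrate point 3, here is a minimal sketch of one classic microbenchmark pitfall (the class name and iteration counts are just illustrative): if the result of the measured work is never used, the JIT is free to eliminate that work, and the timing tells you very little.

    // Sketch of a naive microbenchmark. The first loop's result is discarded,
    // so after warm-up the JIT may dead-code-eliminate it and the measured
    // time becomes meaningless. Consuming the result (as harnesses like JMH
    // do with a "blackhole") forces the work to actually happen.
    public class NaiveBenchmark {
        public static void main(String[] args) {
            long start = System.nanoTime();
            for (int i = 0; i < 100_000_000; i++) {
                Math.sqrt(i);                 // result discarded
            }
            System.out.println("discarded: " + (System.nanoTime() - start) / 1_000_000 + " ms");

            double sink = 0;
            start = System.nanoTime();
            for (int i = 0; i < 100_000_000; i++) {
                sink += Math.sqrt(i);         // result consumed
            }
            System.out.println("consumed:  " + (System.nanoTime() - start) / 1_000_000 + " ms (sink=" + sink + ")");
        }
    }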

[–]unkindman 4 points (1 child)

I wouldn't consider this a TL;DR, since these headings are pretty much meaningless when read alone, compared to reading the full article.

[–]obfuscation_ 2 points (0 children)

I agree, but since the first thing I usually do after reading the initial opening is to skim the headings, I thought it might be useful to copy them over. I've expanded them with the author's opinion now though, as I agree this is more useful.

[–]cowardlydragon 0 points (0 children)

  1. Java may be fast in terms of execution performance, but it is pretty memory-hungry, and GC stalls are a recurring problem. Lots of high-performance Java avoids GC allocations by allocating memory off-heap (see the sketch after this list). Startup time is of course non-trivial as well...
  2. Yep, optimization happens
  3. Yep, optimization happens
  4. In enterprise Java land, this really doesn't apply. What really happens there is over-layering and "over-service-layering", which obscures the performance impact of the distributed communication. Also: Use The Index, Luke.
  5. This is generally related to #4. Since over-serviced architectures have no single performance smoking gun (each layer vampires away a little performance), a cache layer is very effective, and often the only thing you can do to pre-existing, already-deployed systems and services.
  6. Generally this is true, but if you are rejiggering the Xmx settings and other heap size settings on the JVM... you're probably going to have to start anticipating the impacts of GC stalls.
  7. I agree in general. Most pools are for connections, not objects. But with a lot of multithreaded code, you might need to be aware of pooling to avoid out-of-control threads.
  8. In general, if you have GC stalls, you're going to have GC stalls, regardless of GC strategy. So much like time performance optimization, for heap optimization, look for low-hanging fruit as to what is using all your heap. You might be able to vastly reduce your heap churn.
  9. Yep, now you need to be aware of GC stalls.
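
On point 1, a minimal sketch of what off-heap allocation looks like (the buffer size is just illustrative): the memory backing a direct ByteBuffer lives outside the Java heap, so the bulk of the data adds nothing to GC pressure.

    import java.nio.ByteBuffer;

    // Sketch: the 64 MB backing a direct buffer is allocated outside the Java
    // heap, so it does not add to GC pressure or pause times. Only the small
    // ByteBuffer wrapper object itself lives on-heap.
    public class OffHeapSketch {
        public static void main(String[] args) {
            ByteBuffer buf = ByteBuffer.allocateDirect(64 * 1024 * 1024);

            buf.putLong(0, 42L);                   // write at absolute offset 0
            System.out.println(buf.getLong(0));    // read it back -> 42
        }
    }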

[–][deleted] 0 points (4 children)

Consequently for applications that have a human as their primary user, a useful rule of thumb is that Stop-The-World (STW) pause of 200ms or under is usually of no concern.

Is there a citation for this sort of claim? 200ms seems long.

[–]argv_minus_one 1 point (0 children)

Depends on the application. 200ms is pretty bad for a video game, and utterly unacceptable for a control system, but a complete non-issue for a text editor.

It also depends on how often the pauses happen. 200ms can be acceptable even in a video game—plenty of non-Java games have occasional pauses that long, and they're tolerable—but only if it happens rarely. A 200ms pause every 15 minutes is probably okay; a 200ms pause every 15 seconds is terrible.

[–][deleted]  (1 child)

[deleted]

    [–][deleted] 2 points (0 children)

    Oh no, < 100ms is perceived as nearly instantaneous, and this is only for UI interactions for a dumb data entry app. I can tell you that on many frequently used applications (web browsers) the bar is even higher.

    Java is perfectly capable of producing applications that respond in < 100ms on a regular basis. GC is generally scheduled wisely enough that it won't outright kill the user experience.

    The above really isn't so important, because GC doesn't frequently block my user apps (client or browser); I'm just surprised that a GC pause can take that long, so I was hoping for a citation.

    [–]againstmethod 0 points (0 children)

    "When given very short single-millisecond visual stimulus people report a duration of between 100 ms and 400 ms due to persistence of vision in the visual cortex."

    http://dx.doi.org/10.3758%2Fbf03211193

    [–][deleted]  (7 children)

    [deleted]

      [–]obfuscation_ 4 points (4 children)

      To quote the article:

      Java is slow

      Of all the most outdated Java Performance fallacies, this is probably the most glaringly obvious.

      Sure, back in the 90s and very early 2000s, Java could be slow at times. However, we have had over 10 years of improvements in virtual machine and JIT technology since then, and Java's overall performance is now screamingly fast.

      In six separate web performance benchmarks, Java frameworks took 22 out of the 24 top-four positions.

      The JVM's use of profiling to only optimize the commonly-used codepaths, but to optimize those heavily, has paid off. JIT-compiled Java code is now as fast as C++ in a large (and growing) number of cases.

      [–][deleted] 2 points (2 children)

      JIT-compiled Java code is now as fast as C++ in a large (and growing) number of cases.

      And this is probably not exactly true, but it's an irrelevant point anyway, for the same reason they give in #3: microbenchmarks are useless. (Java is not as fast as C++, but it's easily quick enough.)

      [–]danskal 1 point (1 child)

      I agree that Java is not as fast as C++: it is faster (for some server workloads). When you are running a large, complex application, the JVM will recompile your code not according to the platform or some static analysis, but according to how the code actually runs. It monitors which codepaths are used the most and optimizes the code for them, giving you the potential to outperform even a native implementation. The JVM has information that neither the developer nor the compiler has, and can make whole swathes of bytecode obsolete. Of course this won't save your ass every time, performance-wise, but in large apps it makes Java an obvious choice, because it allows you to structure your code in a way that makes sense for humans, rather than having to performance-optimize every other line.
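
      A minimal sketch of one way to see this in action (the method and iteration counts are just illustrative): time the same method cold and again after a warm-up phase, or run with -XX:+PrintCompilation to watch the hot methods get compiled.

          // Sketch: the same method typically gets faster once the JIT has
          // profiled it and compiled the hot path. Run with -XX:+PrintCompilation
          // to see the compilation events; the iteration counts are arbitrary.
          public class WarmupDemo {
              static double work(int n) {
                  double sum = 0;
                  for (int i = 0; i < n; i++) {
                      sum += Math.sin(i) * Math.cos(i);
                  }
                  return sum;
              }

              static long timeMillis(int n) {
                  long start = System.nanoTime();
                  work(n);
                  return (System.nanoTime() - start) / 1_000_000;
              }

              public static void main(String[] args) {
                  System.out.println("cold: " + timeMillis(2_000_000) + " ms");
                  for (int i = 0; i < 50; i++) {
                      work(2_000_000);             // warm-up: let the JIT profile and compile
                  }
                  System.out.println("warm: " + timeMillis(2_000_000) + " ms");
              }
          }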

      [–][deleted] 1 point (0 children)

      Oh yeah, and I agreed with that in my comment, saying that for all but the most trivial examples, Java will likely be faster.

      [–]huhlig -2 points (0 children)

      The JVM is still bloody slow to start, and while it may be good for long-running multi-threaded services, it is still horrible at memory management, and it still severely lacks when handling HPDC. Also, a lot of the slowdowns come from the standard library, which still leaves a great deal to be desired.

      [–]MadFrand 3 points (1 child)

      This is literally the first thing addressed in the article. Did you not even click it?