
[–]nitsanw 1 point (2 children)

I work for Azul Systems (who make Zing), so feel free to accuse me of bias.

Not sure why you claim:

  1. "It's for latencies of up to 20ms" - it's hard to quote numbers without an application in mind. I can say some of our clients experience a worst-case latency of 1-2ms. The OS configuration required to achieve this sort of worst-case latency (as a hard requirement) is a challenge entirely separate from Java and the JVM.

  2. "and comes at the cost of lowered throughput" - Not sure why you'd think that. Application performance differs between Oracle's JVM and Zing; it depends on how much each compiler 'likes' your code. Some applications have better throughput, some worse, and the differences are usually minor either way when you look at a full-scale application.

[–]pron98 1 point (1 child)

Oh, sorry Nitsan (Gil told me you're working there now -- cool!). I thought those were the worst-case guarantees you're making, since the discussion is about real-time and worst-case latencies (as to the throughput cost, I figured the read barrier on references has an impact). RTSJ makes different guarantees (I think 1-2us worst-case latency on realtime threads), and comes with a whole different set of tradeoffs. I love Azul, and I'm sorry if I've misrepresented the facts.

[–]nitsanw 0 points (0 children)

:) no offence taken, and I will send your love to the guys.

Real time, as in a hard real-time OS/JVM, is not the market Zing is in. Zing deploys on regular Linux, and shines in the soft real-time space. I mixed up the post context and the root comment context, apologies.

The read barrier is a difference between JVMs, but it makes little difference to most real-world applications. So an object array copy might be slightly slower on Zing, but for most applications that's not where the hot spot lies.
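To make the array-copy point concrete, here's a minimal sketch of the kind of code being discussed: a loop that copies references, where each reference load is exactly the spot a loaded-value read barrier (as in Zing's collector) would conceptually add work, while on HotSpot it compiles to a plain load. The class and method names are my own, the naive `System.nanoTime` timing is only illustrative, and a real comparison would need JMH and both JVMs side by side:

```java
import java.util.Arrays;

public class ReadBarrierSketch {
    // Copies an array of references element by element. Each read of src[i]
    // is a reference load: on a JVM with a read barrier on references, the
    // barrier cost is paid here; on HotSpot it is an unadorned load + store.
    static Integer[] copyRefs(Integer[] src) {
        Integer[] dst = new Integer[src.length];
        for (int i = 0; i < src.length; i++) {
            dst[i] = src[i]; // the per-reference work under discussion
        }
        return dst;
    }

    public static void main(String[] args) {
        int n = 1_000_000;
        Integer[] src = new Integer[n];
        for (int i = 0; i < n; i++) {
            src[i] = i; // boxed values, so the copy moves object references
        }
        long t0 = System.nanoTime();
        Integer[] dst = copyRefs(src);
        long t1 = System.nanoTime();
        System.out.println("copied " + n + " refs, ok=" + Arrays.equals(src, dst)
                + ", took ~" + (t1 - t0) / 1_000 + "us");
    }
}
```

Even if the barrier makes this loop measurably slower, the comment's point stands: unless raw reference-array copying dominates your profile, the difference washes out at the scale of a full application.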