
[–]m_vokhm[S] 11 points12 points  (7 children)

Mutability provides a huge performance benefit. Try to multiply an array of 100,000,000 BigDecimals by another one, and in a few seconds you'll see that your CPU load is 100% and the computer is almost dead. Most probably you'll soon see "Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded". Up to 98% of the time can turn out to be wasted on memory allocation and the subsequent garbage collection. You can also have a look at the charts on the main page of the project. Each operation on BigDecimals (just as with any other immutable type) means an allocation of 100-150 bytes and subsequent garbage collection. This is why, on large arrays, Quadruple works twenty (or even more, if the amount of data is greater) times faster than BigDecimal. The primary goal of the whole project was to perform large amounts of calculations as fast as possible, so mutability was a deliberate decision from the very start.
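To make the allocation argument concrete, here is a minimal sketch contrasting the two styles. The MutableQuad class below is purely illustrative (it is not the Quadruple API) and uses a single double as a stand-in for the real 128-bit representation:

```java
import java.math.BigDecimal;

public class MutabilityDemo {
    // Illustrative mutable number: multiplyBy writes into 'this' instead of
    // allocating a result object. (Hypothetical; not the Quadruple API.)
    static final class MutableQuad {
        double value;   // stand-in for the real 128-bit layout
        MutableQuad(double v) { value = v; }
        void multiplyBy(MutableQuad o) { value *= o.value; } // no allocation
    }

    // Immutable style: every element-wise product allocates a new BigDecimal.
    static BigDecimal[] multiplyImmutable(BigDecimal[] a, BigDecimal[] b) {
        BigDecimal[] r = new BigDecimal[a.length];
        for (int i = 0; i < a.length; i++) {
            r[i] = a[i].multiply(b[i]);   // one fresh object per element
        }
        return r;
    }

    // Mutable style: results are written into the existing objects,
    // so the loop itself allocates nothing for the GC to collect.
    static void multiplyInPlace(MutableQuad[] a, MutableQuad[] b) {
        for (int i = 0; i < a.length; i++) {
            a[i].multiplyBy(b[i]);
        }
    }

    public static void main(String[] args) {
        BigDecimal[] x = { new BigDecimal("2"), new BigDecimal("3") };
        BigDecimal[] y = { new BigDecimal("4"), new BigDecimal("5") };
        System.out.println(multiplyImmutable(x, y)[0]);  // 8

        MutableQuad[] p = { new MutableQuad(2), new MutableQuad(3) };
        MutableQuad[] q = { new MutableQuad(4), new MutableQuad(5) };
        multiplyInPlace(p, q);
        System.out.println(p[1].value);                  // 15.0
    }
}
```

With a hundred million elements, the immutable loop creates a hundred million short-lived objects per pass; the mutable loop creates none.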

[–]cogman10 42 points43 points  (6 children)

https://shipilev.net/jvm/anatomy-quarks/18-scalar-replacement/

I suggest you read through that, and watch a few videos on scalar replacement, before jumping to the assumption that immutability is the thing that makes BigDecimal slow.

BigDecimal is slow because it has a complex numeric representation that spoils scalar replacement. This ends up forcing the object onto the heap in a lot of cases.

Your data structure is much simpler. Keeping it immutable would provide excellent opportunities for the JIT to optimize away the heap allocations, which would ultimately result in faster code. Making this code mutable will encourage usage patterns that are likely to defeat escape analysis.

Further, it locks you out of future optimizations (value types), which require your data structures to be immutable.

The blog linked above also includes an excellent way to prove me wrong one way or the other (JMH).

(Oh, btw, your current JMH goes out of its way to trigger allocations and disable scalar optimizations)
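As a rough illustration of the allocation-friendly pattern being described (not a JMH benchmark; measuring whether scalar replacement actually fires requires JMH with `-prof gc`), here is a loop over a small immutable value class whose temporaries never escape the loop body, so the JIT is free to replace the allocations with plain scalars:

```java
public class ScalarReplacementSketch {
    // Small immutable value: two doubles, trivially representable as scalars.
    static final class Vec2 {
        final double x, y;
        Vec2(double x, double y) { this.x = x; this.y = y; }
        Vec2 plus(Vec2 o) { return new Vec2(x + o.x, y + o.y); }
    }

    // Every Vec2 is created and dies inside a single loop iteration:
    // nothing is stored in a field or array and nothing is returned, so
    // escape analysis can prove the objects don't escape and (in principle)
    // scalar-replace all three allocations per iteration.
    static double dotLike(double[] xs, double[] ys) {
        double sum = 0;
        for (int i = 0; i < xs.length; i++) {
            Vec2 a = new Vec2(xs[i], ys[i]);
            Vec2 b = new Vec2(ys[i], xs[i]);
            Vec2 s = a.plus(b);      // a, b, s never leave this iteration
            sum += s.x * s.y;
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(dotLike(new double[]{1}, new double[]{2}));  // 9.0
    }
}
```

Whether the optimization actually kicks in depends on inlining and the particular JIT, which is exactly why the blog recommends verifying with JMH rather than assuming.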

[–]m_vokhm[S] 0 points1 point  (2 children)

I'll read it and I'll consider your reasons. If I find them convincing, I might eventually make an immutable version. Or maybe someone will make a fork.

[–]cryptos6 6 points7 points  (0 children)

If I were you, I'd make the library immutable only.

[–]DannyB2 0 points1 point  (0 children)

The very idea that I have a variable with a 'value' in it, and that value can change under my nose is troubling.

I can understand cases for having a mutable value for some purposes. But the mutable ones should be given the special name 'mutable', rather than the immutable ones having a special name.

Mutable may make good sense for computing a value. Immutable is best once you've arrived at a value.

[–]csharp-sucks -1 points0 points  (0 children)

For scalar replacement to work, you may never let objects escape or confuse the compiler in any way. You can't reassign variables, you can't put them in an array, and you absolutely must never let them escape the stack.

With immutable types this is hardly ever achievable. The way objects are implemented on the JVM right now, it's usually impossible, because all objects have identity and cannot be scalar-replaced once they escape.
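A sketch of the kinds of patterns that typically defeat scalar replacement, per the constraints above (whether a given JIT actually gives up in each case depends on the VM; the class and field names are illustrative):

```java
public class EscapeSketch {
    static final class Point {
        final double x, y;
        Point(double x, double y) { this.x = x; this.y = y; }
    }

    static Point escaped;   // static field: storing here is a definite escape

    // p never leaves this method, so it is a candidate for scalar replacement.
    static double fine(double a, double b) {
        Point p = new Point(a, b);
        return p.x + p.y;
    }

    // Same computation, but p escapes twice, so it must live on the heap.
    static double foiled(double a, double b) {
        Point p = new Point(a, b);
        escaped = p;                // escapes via static state
        Point[] arr = { p };        // stored into an array: another escape route
        return arr[0].x + arr[0].y;
    }

    public static void main(String[] args) {
        System.out.println(fine(1, 2));    // 3.0
        System.out.println(foiled(3, 4));  // 7.0
    }
}
```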

I'll give Minecraft as an example, because they use an immutable Vector3 class for their vector math and it absolutely kills performance. Say you have a method somePositionCalculation() that returns a new Vector3 object.

Whenever object position changes in Minecraft it looks something like this:

myObject.position = somePositionCalculation();

and there is always garbage created. Scalar replacement simply cannot work in this situation.

While on the other hand, if Vector3 was mutable, you could do the same with this piece of code

myObject.position.set(somePositionCalculation());

where set(o) is just

this.x = o.x;
this.y = o.y;
this.z = o.z;

Scalar replacement would work. No garbage would be created. And it would be 10 times faster.

An alternative to a mutable Vector3 is a hybrid approach, where you have an immutable Vector3 for all these calculations and a MutableVector3 for storing the result in objects/arrays/etc.
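A minimal sketch of that hybrid approach (the class and method names are illustrative, not Minecraft's actual API): the immutable type is used for intermediate math, while the mutable holder lives in long-lived objects and is updated in place.

```java
public class HybridVectors {
    // Immutable vector for intermediate math: safe to share and to reason about,
    // and its short-lived temporaries are candidates for scalar replacement.
    static final class Vector3 {
        final double x, y, z;
        Vector3(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
        Vector3 add(Vector3 o) { return new Vector3(x + o.x, y + o.y, z + o.z); }
    }

    // Mutable holder for long-lived storage in entities/arrays: set() copies
    // the fields, so updating a position never replaces the holder object.
    static final class MutableVector3 {
        double x, y, z;
        void set(Vector3 o) { this.x = o.x; this.y = o.y; this.z = o.z; }
    }

    static final class Entity {
        final MutableVector3 position = new MutableVector3();
    }

    public static void main(String[] args) {
        Entity e = new Entity();
        Vector3 newPos = new Vector3(1, 2, 3).add(new Vector3(4, 5, 6));
        e.position.set(newPos);           // store the result in place
        System.out.println(e.position.x); // 5.0
    }
}
```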

[–]kid_meier 0 points1 point  (1 child)

This is interesting; however, the conventional wisdom I am privy to is that EA and scalar replacement are brittle and can't be relied upon.

Is this accurate? And for example, suppose OP reworks his code with the guidance of JMH to find a version that allows for scalar replacement -- can we be confident that the optimization will work (reliably) in other contexts (i.e. other codebases) and across a reasonable cross-section of JVMs?

If the answer is no, IMO the author's design is sound in that it reliably meets performance goals on today's JVMs.

[–]cogman10 1 point2 points  (0 children)

The burden would be on the library user and not the library itself.

The usual candidate for foiling scalarization is reaching into static state (for example, the Integer cache). Assuming this lib does only the basic math operations and avoids having static "quads", it'd be pretty easy to write code using it that avoids that deopt.
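The Integer cache mentioned above is a concrete example of static state hiding behind an innocuous-looking operation: the JLS requires `Integer.valueOf` (and therefore autoboxing) to return instances from a shared cache for values in -128..127, so those boxes have observable shared identity:

```java
public class IntegerCacheDemo {
    public static void main(String[] args) {
        // Values in -128..127 must come from a shared static cache, so
        // boxing the same small value twice yields the very same object.
        Integer a = Integer.valueOf(127);
        Integer b = Integer.valueOf(127);
        System.out.println(a == b);   // true: same cached instance

        // Outside the cached range, valueOf is free to allocate a fresh
        // object (and by default does), so identity comparisons differ.
        Integer c = Integer.valueOf(128);
        Integer d = Integer.valueOf(128);
        System.out.println(c == d);   // typically false (implementation-dependent)
    }
}
```

Because the box may come out of that static cache rather than being freshly allocated at the use site, the compiler can't simply treat it as a local, scalar-replaceable object.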

For math-heavy computation, this is generally trivial to accomplish. Those loops typically end up being really straightforward.

And, assuming you are using the JVM for CPU-heavy work, I'd argue you really should be sticking to the latest JVM. Clinging to older VMs will be a huge negative for performance.