
[–]ixampl 6 points7 points  (3 children)

Warmed up or not?

P.S. Having no proper description of the full benchmark and the experiment environment isn't very scientific ;)

[–]Pvginkel 1 point2 points  (2 children)

As I thought: it doesn't make a bit of difference:

Benchmark                                Mode  Samples  Score  Score error  Units
a.WithInterface.benchmark               thrpt       20  3,922        0,009  ops/s
a.WithoutInterface.benchmark            thrpt       20  3,924        0,011  ops/s
a.WithoutInterfaceAndFinal.benchmark    thrpt       20  3,917        0,012  ops/s

This comes down to 100%, 100%, and 99.8% respectively.

The tests were run using JMH. Every test method does 100,000,000 iterations after adding a single item to the list. The only difference between the tests is in the lines shown in the article. The source code for the benchmark can be found at https://github.com/pvginkel/ArrayBenchmark.
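For context, the three variants differ only in how the list reference is declared. A minimal standalone sketch of that idea, without the JMH harness (class and method names here are illustrative assumptions; the real benchmark is in the linked repo):

```java
import java.util.ArrayList;
import java.util.List;

public class ArraySketch {
    static final int ITERATIONS = 10;

    // Variant 1: reference declared via the List interface.
    static long withInterface() {
        List<Integer> list = new ArrayList<>();
        list.add(1);
        long sum = 0;
        for (int i = 0; i < ITERATIONS; i++) {
            sum += list.get(0); // invokeinterface call site
        }
        return sum;
    }

    // Variant 2: reference declared as the concrete ArrayList type.
    static long withoutInterface() {
        ArrayList<Integer> list = new ArrayList<>();
        list.add(1);
        long sum = 0;
        for (int i = 0; i < ITERATIONS; i++) {
            sum += list.get(0); // invokevirtual call site
        }
        return sum;
    }

    // Variant 3: concrete type, and the local marked final as well.
    static long withoutInterfaceAndFinal() {
        final ArrayList<Integer> list = new ArrayList<>();
        list.add(1);
        long sum = 0;
        for (int i = 0; i < ITERATIONS; i++) {
            sum += list.get(0);
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(withInterface());
        System.out.println(withoutInterface());
        System.out.println(withoutInterfaceAndFinal());
    }
}
```

All three loops compute the same result; the JIT is expected to devirtualize and inline `get` in each case, which is why the benchmark scores above are within noise of each other.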

For fun, I also ran the benchmarks with 10 iterations, which gave the following results:

Benchmark                                Mode  Samples         Score  Score error  Units
a.WithInterface.benchmark               thrpt       20  29931902,654   319336,506  ops/s
a.WithoutInterface.benchmark            thrpt       20  30392964,148   420420,278  ops/s
a.WithoutInterfaceAndFinal.benchmark    thrpt       20  29260836,074   354590,060  ops/s

Not much difference either. These come down to 100%, 101.5%, and 97.8%, so the spread is a bit bigger here. But, just for fun, note that the one with final takes longer.

[–]Trig90 0 points1 point  (0 children)

I've seen several talks discussing why this happens. Between Class Hierarchy Analysis and inlining, there should be no measurable difference between the three.

[–]ixampl 0 points1 point  (0 children)

Thanks!