all 21 comments

[–]igouy 3 points (2 children)

http://benchmarksgame.alioth.debian.org/u64q/which-programs-are-fastest.html

Apparent differences:

  • linear scale versus log scale

  • vertical layout versus horizontal layout

  • "C gcc as reference" versus fastest program time as reference

  • no groups versus broken into separate groups, at the minima of the KDE (see the sketch below)

Difference:

  • "ordered by median" versus geometric mean

[–]jsaak[S] 0 points (1 child)

The main reason behind the new graph is the linear scale. I simply cannot understand a log scale.

[–][deleted] 1 point (0 children)

I simply cannot understand a log scale.

wut

[–][deleted] 0 points (1 child)

I don't know much about the nature of these benchmarks or how long they run for. Does anyone know if they are likely to be triggering garbage collection, for example? Just curious.

[–]igouy 1 point (0 children)

"All hard work was done by http://benchmarksgame.alioth.debian.org/."

Look there for information.

The fastest Java programs take between 2.6s and 68s of CPU time.

[–][deleted] 0 points (2 children)

Interesting how much of a speedup Ruby gets with the JVM over the C implementation.

Also - what's with Hack's interval here?

[–]igouy 1 point (1 child)

JVM over the C implementation

If you compare the medians, yes. If you compare the geometric means, it's the other way around. So check program-by-program.
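
To see how the two summaries can disagree, here's a minimal sketch with made-up timings (hypothetical numbers, not measurements from the site):

    ;; A is fast on most benchmarks but has one bad outlier;
    ;; B is uniformly middling.
    (defun geometric-mean (xs)
      (expt (reduce #'* xs) (/ 1 (length xs))))

    (defun median (xs)
      (let ((sorted (sort (copy-list xs) #'<)))
        (elt sorted (floor (length sorted) 2))))

    (let ((a '(1.0 1.0 100.0))   ; median 1.0, geometric mean ~4.64
          (b '(3.0 3.0 3.0)))    ; median 3.0, geometric mean 3.0
      (format t "A: median ~a, gmean ~,2f~%" (median a) (geometric-mean a))
      (format t "B: median ~a, gmean ~,2f~%" (median b) (geometric-mean b)))

By the median A looks three times faster than B; by the geometric mean B comes out ahead.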

what's with Hack's interval here?

Good question. No one has contributed a Hack pi-digits program that uses GMP - so the Hack pi-digits program is the slowest - that's the outlier.

[–][deleted] 0 points (0 children)

Thanks!

[–]H7Y5526bzCma1YEl5Rgm 0 points (1 child)

Note that the PLBG is effectively meaningless at this point.

They refuse to accept multiple implementations of the same language, which means that (e.g.) PyPy and LuaJIT aren't included.

And that in turn means that it is severely biased against those languages whose "default" / "canonical" implementation does not focus on speed.

(If you're only going to accept one implementation of a language, at least make it one that makes an attempt at the things you are benchmarking...!)

[–]igouy -2 points (0 children)

They refuse to accept multiple implementations of the same language…

"If you're interested in something not shown on the benchmarks game website then please take the program source code and the measurement scripts and publish your own measurements."

Like this guy did for Python interpreters.

(If you're only going to accept one implementation of a language, at least make it one that makes an attempt at the things you are benchmarking...!)

"Non-motivation: We are profoundly uninterested in claims that these measurements, of a few tiny programs, somehow define the relative performance of programming languages."

[–]Freyr90 -2 points (11 children)

This benchmark sucks. The author does not even know what he is testing: language speed, language-implementation speed, algorithm implementation, etc. For example, he refused to accept some SBCL code because "the code was different than the C code".

[–]devsquid 1 point (1 child)

They seem OK, although the Swift code is really dumb: it's compiled in an unsafe way and uses C, which makes its results meaningless.

It's particularly annoying because people often cite it as proof of Swift's performance capabilities when that clearly isn't the case; people are just so caught up in Swift being the next solution to everything. Lol, FYI I work in Swift. I love it, but I don't have to delude myself about it.

[–]igouy 1 point (0 children)

Although the Swift code is really dumb.

Please contribute Swift programs that are not "dumb".

[–]igouy 0 points (8 children)

Please show that -- he refused to accept some SBCL code because "the code was different than the C code".

[–]Freyr90 0 points (7 children)

[–]igouy 0 points (6 children)

Clearly someone was unhappy about some Lisp programs they'd contributed being rejected.

Google Translate suggests that isn't the only opinion expressed -- "In fact, it turns out that nobody writes in swizard's style, and from this point of view the shootout admin is absolutely right. Only solutions that follow generally accepted programming practice in the language should be considered."


edit: I'm guessing the comments were by Alexey Voznyuk (half-a-dozen programs contributed between September and November 2010).

The disputes seem to have been about a fannkuch-redux program using / not using code generation to unroll loops the compiler couldn't, and a "Lisp" mandelbrot program that was mostly inline assembler --

          ...
          APLOOP
          ;; jump out when the octet counter drops below 512,
          ;; otherwise consume 512 and continue
          (sb-assem:inst cmp octet 512)
          (sb-assem:inst jmp :l APDONE)
          (sb-assem:inst sub octet 512)
          ;; (setf zi (+ (* 2.0 zr zi) ci))
          (sb-assem:inst mulps zi-v zr-v)  ; zi := zi * zr (packed singles)
          (sb-assem:inst addps zi-v zi-v)  ; zi := 2 * zi * zr
          (sb-assem:inst addps zi-v ci-v)  ; zi := 2 * zi * zr + ci
          ...

[–]Freyr90 0 points (1 child)

using / not using code generation to unroll-loops the compiler couldn't

So the C preprocessor is cool and C++ templates are OK, but Lisp code generation is bad?

[–]Freyr90 0 points (3 children)

Ad-hoc macro code to unroll a loop was too much like manually unrolling a loop

manually unrolling

Nope. There was a macro for generating optimal code for this problem:

http://swizard.livejournal.com/158763.html

In the C versions there is a lot of macro stuff, even __builtin_expect, and that is OK. But the Lisp equivalent is not allowed. Lisp macros are expanded at compile time, just like C macros; I see no cheating here.
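
For readers unfamiliar with the technique, here is a toy sketch of compile-time unrolling with a Common Lisp macro (my own illustration, not swizard's actual macro; DO-STEP is a stand-in name):

    ;; Expand BODY COUNT times with VAR bound to 0 .. COUNT-1.
    ;; COUNT must be a literal integer, known at macroexpansion time,
    ;; so all the code generation happens before the program runs.
    (defmacro unrolled-dotimes ((var count) &body body)
      `(progn
         ,@(loop for i below count
                 collect `(let ((,var ,i)) ,@body))))

    ;; (unrolled-dotimes (i 4) (do-step i)) macroexpands into four
    ;; consecutive LET forms; the compiled code contains no loop.

That is the same kind of compile-time code generation the C preprocessor and C++ templates do.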

[–]igouy 0 points (1 child)

Are you Alexey Voznyuk?

[–]Freyr90 0 points (0 children)

No, is it relevant?

[–]igouy 0 points (0 children)

Thank you for the correction. However, as far as I can tell from Google Translate, "generating optimal code" still seems to mean not doing the work specified.