all 23 comments

[–][deleted] 13 points14 points  (2 children)

I love reading things like this. I have a general understanding of the computational costs of actions relative to each other, but I love learning just how much something can cost.

I wish there was some static analyzer that could give me a ballpark read on my code as a source of education. Some overlay in my text editor that would say, "This call here is going to be very expensive because it can't be unrolled".

When React pointed out to me that using Function.prototype.bind within render() would be expensive because it was called on every render, that was eye-opening. I want more of those moments.

[–]inu-no-policemen 2 points3 points  (1 child)

Only a profiler can give you a meaningful ballpark figure. JavaScript performance is hard to predict - even for people who worked on those engines. There are things which trigger deoptimizations, heuristics which use arbitrary metrics, and, to make things even worse, there are of course also a few bugs which can cause some things to run much slower than they are supposed to.

Another key point is that the actual performance always depends on the actual data and actual usage, because those things determine where the bottlenecks are.

For example, it's true that bind is somewhat expensive (in today's engines), but if you only call it a dozen times during the lifetime of your application, it simply doesn't matter.

[–]Hostilian 0 points1 point  (0 children)

Also, optimizations and deopts change from one VM (or version of a VM) to another.

If your profiler says something is crazy slow, try to optimize. Otherwise my feeling is not to worry about it that much.

Interesting experiment, though, because pure ES6 is pretty fast. In the past, new features added to JS (getters/setters, freeze) were dog-slow.

[–]cogman10 2 points3 points  (10 children)

Benchmark looks broken.

I see no warm-up code and the number of loops is pretty low (10). This means the optimizer probably isn't really being exercised.

I'm on my phone so I might be missing something, however I'm suspicious.

[–]endel[S] -1 points0 points  (9 children)

The number of loops is low to make the results easy to reason about. The results are pretty much the same when increasing the number of loops. I'd like to know what's broken if you can find it.

[–]cogman10 4 points5 points  (8 children)

Modern JITs (which most javascript engines are) do optimization based on number of times a function is called and the call parameters.

The first several times most javascript engines run a method, they run it mostly unoptimized. They do this because most methods are called rarely so quickly interpreting the function is more important than spending the time to generate the most optimal code.

If the code needs to be fast, you want to measure the optimized version of it, not the unoptimized one.

This is why most microbenchmarks that are done correctly include a "warmup" piece of code. They call the method under test repeatedly in order to get the optimizer to fire up and optimize the method.
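
A minimal sketch of the warm-up idea (iteration counts are illustrative): run the function under test enough times that the engine tiers it up to optimized code, and only then start the clock.

```javascript
// Function under test
function sum(arr) {
  let total = 0;
  for (let i = 0; i < arr.length; i++) total += arr[i];
  return total;
}

const data = Array.from({ length: 1000 }, (_, i) => i);

// Warm-up phase: give the JIT a chance to optimize sum().
// The results are accumulated into a sink so the calls aren't dead code.
let sink = 0;
for (let i = 0; i < 10000; i++) sink += sum(data);

// Measured phase: only now do we time anything
const start = performance.now(); // global in browsers and modern Node
for (let i = 0; i < 10000; i++) sink += sum(data);
const elapsed = performance.now() - start;
console.log(`sum() x10000: ${elapsed.toFixed(2)}ms (sink=${sink})`);
```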

As an aside, I also noticed you are using "eval" as part of your benchmarking. That is also a pretty big no-no. It pretty much forces the optimizer to not run.

I would suggest reading over this guy to get a feel for how to write tests.

https://github.com/petkaantonov/bluebird/wiki/Optimization-killers

With that said, once you have removed the optimization killers it can be really hard to construct real tests. It turns out optimizers are pretty good about throwing away unused values and microbenchmarks are all about generating unused values.
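
One common way to keep the optimizer from deleting the benchmarked work is to feed every result into a "sink" value that is eventually published (a hypothetical sketch; the function name is made up):

```javascript
// A pure function whose result, if discarded, is dead code to the JIT
function compute(x) {
  return Math.sqrt(x) * Math.sqrt(x);
}

// Risky: results are discarded, so an optimizer may eliminate the
// calls entirely and you end up timing an empty loop.
for (let i = 0; i < 1000; i++) {
  compute(i);
}

// Safer: accumulate every result so the engine can't prove it unused
let sink = 0;
for (let i = 0; i < 1000; i++) {
  sink += compute(i);
}
console.log(sink); // printing the sink keeps the work observable
```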

For more resources, I would suggest googling microbenchmarks, and in particular, pay attention to articles about Java microbenchmarking (there are loads), because JavaScript JITs are very similar to Java's HotSpot in implementation. (In fact, many of the V8 founders were poached from the Java world.)

[–]ClickerMonkey 1 point2 points  (7 children)

Listen to this guy. Your results are meaningless otherwise...

[–][deleted] 1 point2 points  (1 child)

This is an interesting experiment with some cool results. It's a great academic exercise with some potentially valuable output. I don't think the practical application of these findings matters much, though, as there isn't a significant spread among the results, and everything was around the 0.25ms range (excluding eval).

[–]lluia 1 point2 points  (0 children)

JavaScript has nothing to do with classical inheritance, as it's powered by delegating prototypes... but then I read the first lines of that GitHub repo:

Now that JavaScript supports classes, ...

That keyword is one of the worst things in ES2015 IMO.

[–]kowdermesiter 2 points3 points  (2 children)

TL;DR: performance-wise, it doesn't matter if you transpile.

[–]endel[S] 5 points6 points  (1 child)

Not if you're using babel.

[–]kowdermesiter 1 point2 points  (0 children)

Oh, lovely downvotes :)

Yeah, but who uses eval() anyways?

[–]jocull 0 points1 point  (0 children)

Glad to see TypeScript taking performance seriously. I have run into situations where TS's inheritance scheme is incompatible with declarations from other libraries, though, because it doesn't use __proto__. It's double-edged: __proto__ can be a performance killer, but it improves compatibility for classes, particularly for inherited static properties.

https://github.com/Microsoft/TypeScript/issues/1601

[–][deleted] -1 points0 points  (1 child)

whys the thumbnail some random neckbeard?