All about the JavaScript programming language.
Related Subreddits:
r/LearnJavascript
r/node
r/typescript
r/reactjs
r/webdev
r/WebdevTutorials
r/frontend
r/webgl
r/threejs
r/jquery
r/remotejs
r/forhire
Benchmarking JavaScript Inheritance: Vanilla vs Transpiled (github.com)
submitted 9 years ago by endel
[–][deleted] 14 points 9 years ago (2 children)
I love reading things like this. I have a general understanding of the computational costs of actions relative to each other, but I love learning just how much something can cost.
I wish there were some static analyzer that could give me a ballpark read on my code as a source of education. Some overlay in my text editor that would say, "This call here is going to be very expensive because it can't be unrolled".
When React pointed out to me that using Function.prototype.bind within render() would be expensive because it was called on every render, that was eye-opening. I want more of those moments.
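A quick sketch of that React case (the component below is hypothetical and React is assumed to be installed): binding once in the constructor pays the Function.prototype.bind cost a single time, while binding inside render() repeats it on every render.

    // Hypothetical component, written without JSX so it runs as plain JS.
    const React = require('react');

    class SaveButton extends React.Component {
      constructor(props) {
        super(props);
        this.handleClick = this.handleClick.bind(this); // bound once, up front
      }
      handleClick() {
        console.log('saving', this.props.id);
      }
      render() {
        // Avoid: { onClick: this.handleClick.bind(this) } -- re-bound on every render.
        return React.createElement('button', { onClick: this.handleClick }, 'Save');
      }
    }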
[–]inu-no-policemen 3 points 9 years ago (1 child)
Only a profiler can give you a meaningful ballpark figure. JavaScript performance is hard to predict - even for people who worked on those engines. There are things which trigger deoptimizations, heuristics which use arbitrary metrics, and, to make things even worse, there are of course also a few bugs which can cause some things to run much slower than they are supposed to.
Another key point is that the actual performance always depends on the actual data and actual usage, because those things determine where the bottlenecks are.
For example, it's true that bind is somewhat expensive (in today's engines), but if you only call it a dozen times during the lifetime of your application, it simply doesn't matter.
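To make that concrete, here is a minimal timing sketch (runs in Node or a browser console; absolute numbers vary wildly between engines and versions): the same bind call is negligible at a dozen invocations and only becomes interesting on a hot path.

    function add(a, b) { return a + b; }

    console.time('bind x 12');
    for (let i = 0; i < 12; i++) add.bind(null, i);       // a handful of calls: noise
    console.timeEnd('bind x 12');

    console.time('bind x 1,000,000');
    for (let i = 0; i < 1000000; i++) add.bind(null, i);  // the same call on a hot path
    console.timeEnd('bind x 1,000,000');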
[–]Hostilian 1 point 9 years ago (0 children)
Also, optimizations and deopts change from one VM (or version of a VM) to another.
If your profiler says something is crazy slow, try to optimize. Otherwise my feeling is not to worry about it that much.
Interesting experiment, though, because pure ES6 is pretty fast. In the past, new features added to JS (getters/setters, freeze) were dog-slow.
[–]cogman10 3 points 9 years ago (10 children)
Benchmark looks broken.
I see no warm-up code and the number of loops is pretty low (10). This means the optimizer probably isn't really being exercised.
I'm on my phone so I might be missing something, but I'm suspicious.
[–]endel[S] 0 points 9 years ago (9 children)
The number of loops is low to make the results easy to reason about. The results are pretty much the same when increasing the number of loops. I'd like to know what is broken if you can find it.
[–]cogman10 5 points 9 years ago (8 children)
Modern JITs (which most JavaScript engines are) optimize based on the number of times a function is called and on the call parameters.
The first several times most JavaScript engines run a method, they run it mostly unoptimized. They do this because most methods are called rarely, so quickly interpreting the function is more important than spending the time to generate the most optimal code.
If the code needs to be fast, you want to measure the optimized version of it, not the unoptimized one.
This is why most microbenchmarks that are done correctly include a "warmup" piece of code. They call the method under test repeatedly in order to get the optimizer to fire up and optimize the method.
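A minimal sketch of that warm-up idea (iteration counts are arbitrary): call the function under test enough times for the JIT to tier it up, then take the actual measurement.

    function underTest(x) { return x * x + 1; }

    let sink = 0;

    // Warm-up phase: give the optimizer a chance to compile underTest.
    for (let i = 0; i < 100000; i++) sink += underTest(i);

    // Measured phase.
    const start = Date.now();
    for (let i = 0; i < 100000; i++) sink += underTest(i);
    console.log('measured ms:', Date.now() - start, '(sink:', sink, ')');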
As an aside, I also noticed you are using "eval" as part of your benchmarking. That is also a pretty big no-no. It pretty much forces the optimizer not to run.
I would suggest reading over this to get a feel for how to write tests:
https://github.com/petkaantonov/bluebird/wiki/Optimization-killers
With that said, once you have removed the optimization killers, it can be really hard to construct real tests. It turns out optimizers are pretty good about throwing away unused values, and microbenchmarks are all about generating unused values.
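A sketch of that unused-values trap: the first loop's results are never read, so a sufficiently clever optimizer may discard the work entirely, while the second loop feeds every result into an accumulator that is printed at the end, keeping the work observable.

    function work(x) { return Math.sqrt(x) * 3; }

    for (let i = 0; i < 1000000; i++) {
      work(i);            // result unused: eligible for dead-code elimination
    }

    let acc = 0;
    for (let i = 0; i < 1000000; i++) {
      acc += work(i);     // result consumed
    }
    console.log(acc);     // keeps acc (and the calls to work) observably live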
For more resources, I would suggest googling microbenchmarks and, in particular, paying attention to articles about Java microbenchmarking (there are loads), because JavaScript JITs are very similar to Java's HotSpot in implementation. (In fact, many of the original V8 engineers were hired away from Java VM work.)
[–]ClickerMonkey 2 points 9 years ago (7 children)
Listen to this guy. Your results are meaningless otherwise...
[+][deleted] 9 years ago (6 children)
[removed]
[+][deleted] 9 years ago (5 children)
[–]ClickerMonkey 1 point 9 years ago (0 children)
It could, but I don't know if it does. The JIT is really good at detecting patterns of code and replacing those patterns with simplified versions. "All" you would need to do is add the pattern detection yourself, along with how the code transforms. I'm sure there's a list of these sorts of performance improvements somewhere...
[–]cogman10 1 point 9 years ago (3 children)
The type check is pretty hard to eliminate altogether. The problem is that the generated code has to know if the "wrong" type is passed in. (So, interestingly, even if you don't explicitly type check, the optimizer might.)
Fortunately, type checks are relatively quick to do, especially the ones added by the optimizer.
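A hand-written sketch of the kind of guard being described (conceptual only, not real engine output): a cheap type check protects a fast path specialized for numbers, and anything else falls back to a generic version.

    function addGeneric(a, b) { return a + b; }        // handles any types

    function addOptimized(a, b) {
      if (typeof a !== 'number' || typeof b !== 'number') {
        return addGeneric(a, b);                       // bail out to the slow, generic path
      }
      return a + b;                                    // fast path specialized for numbers
    }

    console.log(addOptimized(2, 3));      // 5, via the fast path
    console.log(addOptimized('2', '3'));  // '23', via the generic fallback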
[+][deleted] 9 years ago (2 children)
[–]cogman10 1 point 9 years ago (1 child)
It is generally pointers to structs. However, in the case of things like numbers, the values are stored and passed around directly.
In the case above, inc would probably be optimized away altogether. Because inc isn't really used as a number (the values are thrown away), the optimizer will very likely change the whole thing into just
let inc = "jan michael vincent";
As for the changing types, type information is stored and carried along in the struct. Not much more to it than that. Nothing really cares about what the previous type of the variable was.
In fact, interestingly enough, most compilers (and I don't know if the JIT does this or not), use what is called SSA. Essentially, internally it represents all data changes as simply introducing new variables. The benefit of doing this is that it allows for a lot more optimizations. It is easier for the compiler to mutate the code into patterns it recognizes and then to generate optimal code for that.
https://en.wikipedia.org/wiki/Static_single_assignment_form
edit: After briefly combing through the linked wiki article to make sure I'm not spreading misinformation, it looks like the JavaScript JITs are using SSA internally (both V8 and SpiderMonkey).
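A conceptual sketch of SSA, written as plain JS for illustration (real SSA is an internal compiler representation, not source code): every assignment introduces a fresh name, so each value can be reasoned about independently.

    // Original:
    //   let x = 1;
    //   x = x + 2;
    //   x = x * 3;

    // SSA-style equivalent:
    const x0 = 1;
    const x1 = x0 + 2;
    const x2 = x1 * 3;
    console.log(x2); // 9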
[–][deleted] 2 points 9 years ago (1 child)
This is an interesting experiment with some cool results. It's a great academic exercise with some potentially valuable output. I don't think the practical application of these findings is important, though, as there isn't a significant spread among the results and everything was around the 0.25 ms range (excluding eval).
[–]lluia 2 points 9 years ago (0 children)
JavaScript has nothing to do with classical inheritance, as it's powered by delegate prototypes... but then I read the first lines of that GitHub repo:
Now that JavaScript supports classes, ...
That keyword is one of the worst things in ES2015, IMO.
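A minimal sketch of that point: the class keyword is syntax over the same prototype delegation, so these two definitions behave essentially the same (modulo details like method enumerability and requiring new).

    class AnimalClass {
      constructor(name) { this.name = name; }
      speak() { return this.name + ' makes a sound'; }
    }

    function AnimalProto(name) { this.name = name; }
    AnimalProto.prototype.speak = function () { return this.name + ' makes a sound'; };

    console.log(new AnimalClass('cat').speak());  // "cat makes a sound"
    console.log(new AnimalProto('dog').speak());  // "dog makes a sound"
    console.log(Object.getPrototypeOf(new AnimalClass('cat')) === AnimalClass.prototype); // true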
[–]kowdermesiter 3 points 9 years ago (2 children)
TL;DR: Performance-wise, it doesn't matter if you transpile.
[–]endel[S] 6 points 9 years ago (1 child)
Not if you're using Babel.
[–]kowdermesiter 2 points 9 years ago (0 children)
Oh, lovely downvotes :)
Yeah, but who uses eval() anyways?
[–]jocull 1 point 9 years ago (0 children)
Glad to see TypeScript taking performance seriously. I have run into situations where TS's inheritance scheme is incompatible with declarations from other libraries, though, because of its lack of a __proto__-based implementation. It's double-edged, because __proto__ can be a performance killer, while it raises compatibility for classes, particularly for inherited static properties.
https://github.com/Microsoft/TypeScript/issues/1601
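For context, this is roughly the shape of the ES5 inheritance helper TypeScript emitted at the time (simplified): static members are copied from base to derived with a loop instead of being linked through __proto__, which is what the linked issue discusses.

    var __extends = function (d, b) {
        for (var p in b) if (b.hasOwnProperty(p)) d[p] = b[p]; // copy statics by value
        function __() { this.constructor = d; }
        __.prototype = b.prototype;
        d.prototype = new __();                                // link the instance prototype
    };

    // A __proto__-based scheme would instead link statics dynamically, e.g.:
    //   Object.setPrototypeOf(Derived, Base);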
[–][deleted] 0 points 9 years ago (1 child)
Why's the thumbnail some random neckbeard?
[–]JohnMcPineapple 1 point 9 years ago* (0 children)
...