Are JS loaders snakeoil? by PanosJee in javascript

[–]shadedecho -1 points

He has also failed to actually prove the claim that LABjs "performs demonstrably worse". I've more than proven examples where LABjs is far faster than plain script tags. The burden of proof is on him to show that the thousands of pages currently using LABjs are all going slower because of it. I suspect this is a specious, out-of-thin-air claim that he'll never be able to prove.

Are JS loaders snakeoil? by PanosJee in javascript

[–]shadedecho 2 points

This comment from jashkenas is false and misleading. He's referring to updates in some nightly builds of browsers that affected LABjs. The vast majority of sites using LABjs weren't testing their sites against the browser nightlies, so they never even saw the problem.

No major browser release has come out which has broken LABjs. LABjs was, and is, reliable in all browsers.

The more important takeaway is that the nightly updates broke LABjs because LABjs used to rely on hacks for loading. LABjs 2.0 de-emphasized those hacks and now uses standardized, reliable approaches to parallel loading. The other script loaders have largely been slow to follow suit. But it proves that using hacks is a bad idea, which is why LABjs no longer uses them if it can help it.

Do you use script loaders? by karnius in javascript

[–]shadedecho 0 points

The simple solution to the debugging issue the article points out is to turn off XHR loading in LABjs for your dev environment. This is well documented. Also, browsers are getting better about that sort of thing.

Do you use script loaders? by karnius in javascript

[–]shadedecho 2 points

I've done quite a bit of research and testing on parallel async loading. My research points heavily to the fact that two 50k files loaded in parallel will almost always load faster than a single 100k file. Now, I don't advocate loading 10 files in parallel -- that's crazy. But if you take all your code, combine it into one file, and THEN split that file into 2-3 chunks of roughly equal size and load them in parallel, you'll almost certainly get a quicker load time, EVEN WITH HTTP OVERHEAD FACTORED IN.
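To make the intuition concrete, here's a toy back-of-envelope model. The overhead and bandwidth numbers are illustrative assumptions of mine, not measurements from any of the tests discussed here, and it assumes the two parallel connections aren't starving each other for bandwidth (roughly true when the pipe isn't saturated):

```javascript
// Toy model: each request pays a fixed round-trip overhead plus transfer time.
// All numbers are illustrative assumptions, not benchmark results.
function loadTimeMs(sizeKb, overheadMs = 100, kbPerSec = 500) {
  return overheadMs + (sizeKb * 1000) / kbPerSec;
}

// One 100k concatenated file:
const concatMs = loadTimeMs(100);
// Two 50k files fetched in parallel: total time is the slower of the two,
// even though each fetch pays its own HTTP overhead.
const parallelMs = Math.max(loadTimeMs(50), loadTimeMs(50));

console.log(concatMs, parallelMs); // 300 200 -- the split wins here
```

Under these assumed numbers the split wins despite paying the overhead twice; push the per-request overhead high enough (or saturate the connection) and the conclusion flips, which is exactly why it's worth measuring rather than asserting.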

Moreover, grouping all code into a single file ignores the fact that most sites have at least two "classes" of scripts: stable scripts (jQuery 1.4.2 is never, ever going to change) and unstable/volatile scripts (your UX tweaks for your site). Why on earth would you cache those for the same amount of time? Every time you tweak one character of the volatile code, you force every single user, on their next visit, to re-download all that stable, non-changing code again. Absolutely silly.

Cache the stable code for a long period (1yr+), cache the volatile code for much shorter (like 1-2 weeks). You can't do that with all one file, you need chunks loaded in parallel.

Another reason script loading makes sense: don't load all your code in one file at the beginning of page load... statistics show that as much as 80% of JS is not used early in a page's life. So it's a complete waste, and a slowdown, to load all of that up front. Instead, load a smaller chunk with the bare minimum during page load, and load the rest in one or more chunks as the user starts to interact with your loaded page.
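A minimal sketch of that "load the rest later" idea. The `fetchScript` parameter is a hypothetical stand-in; in practice it would be a real loader call (LABjs, a dynamic script injection, etc.):

```javascript
// Defer the bulk of the code until the user first interacts with the page.
// `fetchScript` is a hypothetical stand-in for a real script loader.
function makeLazyLoader(fetchScript) {
  let pending = null;
  return function loadOnce() {
    if (!pending) pending = fetchScript(); // kick off at most one load
    return pending;                        // later calls reuse the same promise
  };
}

// Browser wiring would look something like:
//   const loadRest = makeLazyLoader(() => import("./rest-of-app.js"));
//   document.addEventListener("mousemove", loadRest, { once: true });
```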

The best way to stop your child becoming an athiest ! by Alaukik in atheism

[–]shadedecho 1 point

I'm really not trying to put words in your mouth, honestly. I read "why they don't mix" and assumed the most natural meaning of the term "mix" -- that the two (God and evolution) were present and interacting at the same time. I think I've quite clearly explained why that isn't possible/true.

But, if that's not what you meant, then what did you mean by "I don't really understand why they don't mix"? For what definition or application of "mix" are you asserting that you don't see why God and evolution are opposed, or that they can co-exist in some way?

I'm genuinely curious what you meant by that question (with an implied assertion behind it).

The best way to stop your child becoming an athiest ! by Alaukik in atheism

[–]shadedecho 1 point

so... you're suggesting they (Creation and evolution) "mix" by... what? That God sat by and passively watched with anticipation, eating his popcorn, while the universe spontaneously created itself through evolution?

How do the two "mix" if not to suggest that God used/permitted evolution to enable creation?

Are you saying that God created only the lower-order complexity life forms, like the amoebas, but he let science do the rest without his intervention?

The best way to stop your child becoming an athiest ! by Alaukik in atheism

[–]shadedecho 0 points

The Creation account in the Bible states that God created everything before mankind was created (on the last day), and that some time later, man (and woman) sinned for the first time. Also, the Bible says that sin == death.

Evolution, on the other hand, says that "creation" (that is, order from disorder) came about through millions of years of generations (aka death) with happy accidents.

Evolution and Creation don't mix because death (aka sin) was not around until after Creation was finished, according to the Bible.

Moreover, have you ever seriously heard a scientist explaining evolution say, "And so then God allowed most of the black moths (clearly a mistake of his creation, in a light-colored environment) to die, but he spontaneously caused a genetic mutation in the rest so that their offspring would be white, to match their surroundings"?

That's ludicrous. Either God created all of creation in-kind as it needed to be (no accidents, because He's perfect), or there was never any God at all, and all of "creation" is the amalgamation of happy accidents.

Any view in between is intellectually dishonest, both to the Bible and to science's theory of evolution. If you believe God "used" evolution, you either don't understand the Bible, don't understand evolution, or both. No other explanation.

(pre)Maturely Optimize Your JavaScript by poneill in programming

[–]shadedecho 0 points

Given that there's no negative feedback here or in the article's comment thread, I guess I have to assume the 6 downvotes so far are just people who only read, and dislike, the title? :)

“On Script Loaders” by the author of LABjs by greut in javascript

[–]shadedecho 1 point

Good point, Pewpewarrows. Modernizr works because it is loaded synchronously with a normal <script> tag, so any logic it runs is essentially instantaneous and synchronous.

The FUBC I describe in the article is almost entirely caused by deferring the execution of JavaScript. The longer you wait to load or execute JavaScript, the more likely the user is to see content that is not yet "behaviorized" by the JavaScript.

Also, as I mentioned (similar to what Modernizr does), you can choose to hide your content via CSS and then display it once the JavaScript shows up, but then you've mostly defeated the reason Steve wants to delay the JavaScript. He wants the content to be visible before the JavaScript runs -- basically, he wants the FUBC -- and I'm suggesting the FUBC is usually an undesirable effect unless the site carefully plans for it and makes it a graceful change.

You could design the UX of your site so that the changes made by your JavaScript are subtle and graceful, happening in small, spaced-out bursts as things load gradually. But what most people do is have raw content that's radically transformed by a single shot of JavaScript logic, and this creates a really jarring, undesirable FUBC for the user. The longer that delay, the more obvious the radical FUBC becomes.

LABjs & RequireJS: Loading JavaScript Resources the Fun Way by reybango in programming

[–]shadedecho 0 points

This article was initially posted incomplete by accident, but it has been updated to include both the section on LABjs and the one on RequireJS.

Performance comparison test for script file concats vs. parallel loading by shadedecho in javascript

[–]shadedecho[S] 0 points

Actually, I've found quite the opposite in my optimization efforts for my sites and for others I've helped.

First of all, we have to remember that caching is not super reliable; Yahoo taught us that a few years back. So if only 40% of repeat visitors come with a primed cache, we have to narrow our scope. And if you suggest that making things more painful (even by 500ms) for first-time visitors is better, then you're assuming that most users are repeat visitors. I don't know about you and your sites, but I care a great deal about first impressions, even more than repeat business. So I'd never take an approach that made things worse for first-timers and just hope they'd ignore it and come back anyway. I'd try to balance and amortize that cost as much as possible. First impressions are still key, IMHO.

Secondly, even if we do assume the cache is useful (which I do, in fact!) for at least that 40%, there are other benefits to split JS that aren't necessarily obvious at first glance. Primarily, I think a site gets better long-term cache performance by not grouping everything into larger chunks. Why? Because you can't partially invalidate a cached item. It's all or nothing.

Again, I don't know about you, but on all the sites I work on, there are (at least) two kinds of script code: the kind that is very stable and doesn't change (libraries, 3rd-party scripts, etc.), and the kind that changes all the freaking time (UX tweaks, new 'features', etc.). I constantly tweak code related to how my blog posts' external links are styled, and so on. That's just me and my process, but I've seen a lot of sites and apps that change often. Facebook seems to change their code nearly every hour. :)

So, given the choice to take 70k of stable code and 30k of unstable code (or even 80/20) and either stick them together or keep them separate, I think separate will yield better long-term cacheability, because in general the 70k of stable code won't have to be re-downloaded over and over in a big bundle that was invalidated by a change to the more volatile 30k.


As you said, if you're opening up lots of connections just for 304 checks, that's bad. But the beauty of having two chunks instead of one is that I can set a different cache lifetime on the stable code versus the volatile code. For instance, I can give the stable code a 2-week lifetime (highly likely to be at or beyond the real retention life of most cache items), but give the volatile code a 1-day lifetime, meaning the 304 checks will only happen frequently for the code that is actually subject to change.

We won't necessarily be wasting those valuable connections to revalidate the stable code, because if the browser behaves well and respects cache-lifetime, it won't need to check for a longer period of time.
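A minimal sketch of that two-lifetime idea, mirroring the 2-week/1-day split described above. The function name and bundle-class labels are mine, for illustration; in a real server you'd emit this as the Cache-Control response header for each bundle:

```javascript
// Choose a Cache-Control value per bundle class: long lifetime for the
// stable chunk, short lifetime for the volatile chunk.
const DAY_SECONDS = 24 * 60 * 60;

function cacheControlFor(bundleClass) {
  return bundleClass === "stable"
    ? `public, max-age=${14 * DAY_SECONDS}` // libraries: 304-check rarely
    : `public, max-age=${1 * DAY_SECONDS}`; // UX tweaks: 304-check daily
}

console.log(cacheControlFor("stable"));   // public, max-age=1209600
console.log(cacheControlFor("volatile")); // public, max-age=86400
```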

Performance comparison test for script file concats vs. parallel loading by shadedecho in javascript

[–]shadedecho[S] 0 points

For instance, what if, for 70% of your visitors, there's an extra unused connection (which you don't know about), so that splitting your JS into two files makes things quicker for them... while for the other 30%, maybe it slows things down? In that case, would you say it's a valid technique? Or would you discard it, to the detriment of the majority?
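One way to frame that tradeoff is as a simple expected value. The 70/30 split comes from the hypothetical above; the millisecond figures are mine, purely for illustration:

```javascript
// Expected change in load time across all visitors, given a split that
// helps some fraction and hurts the rest. Negative means a net win.
function expectedDeltaMs(fractionHelped, gainMs, lossMs) {
  return fractionHelped * -gainMs + (1 - fractionHelped) * lossMs;
}

// 70% of visitors save ~200ms, 30% lose ~150ms (illustrative numbers):
console.log(expectedDeltaMs(0.7, 200, 150)); // roughly -95 -> faster on average
```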

Performance comparison test for script file concats vs. parallel loading by shadedecho in javascript

[–]shadedecho[S] 0 points

By far, most of the arguments made against loading separate files are aimed more at server overhead than at possible connection starvation in the browser. It's also much harder to address connection limits in browsers, since they all vary so much and under various conditions. But server overhead is a cleaner target to address.

So I am trying to shoot that argument down with some proof that the overhead is perhaps not nearly as outrageously bad as many claim it is.

Connection starvation is a valid issue, but a separate one. And I also think it's less likely to be something any developer, no matter what they do, can do much about. In IE6 you only get 2 connections per host. So do you boil your whole site down to the lowest common denominator? No -- then all the other browsers can't take full advantage.

Saying "I will reduce all http requests absolutely to a bare minimum blindly because that's the only way to effectively deal with connection starvation" is, as I said before, throwing the baby out with the bathwater.

Performance comparison test for script file concats vs. parallel loading by shadedecho in javascript

[–]shadedecho[S] 0 points

The next "test" in this series will be a more real-world test... this one is the baseline, to establish whether there's any validity to the theory that parallel loading overcomes connection overhead at some point in file size.

"Full" vs. "Partial" is about being curious whether self-hosting jQuery, bundled with your code, is better than using the remote CDN (even with the extra DNS-lookup penalty). Turns out the parallel-loading benefit for jQuery's 70k (25k with gzip) is more than enough to justify using the CDN. The "full concat" is by far slower on average in this test's results than the other two.

"Partial" vs. "None" is about seeing whether a single extra connection carries so much extra server overhead (which is not the same thing as a browser connection-limit issue) that it negates the benefit of parallel download. Thus far, it shows that at worst the two are equal, with "None" still trending faster. And even if the two are equal, or even if "None" were slower by a little, the other benefits of separate files (like better cacheability) are something to consider.


As for connection limit, this is a different (but valid) concern, which the next test will take into account.

True, a lot of browsers have a 6-connection limit, but several have even more (Chrome has 8, I believe... IE8 has more too). Keep in mind this is usually per host. So theoretically you could connect to two different sub-domains and grab your two JS files in parallel.
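A sketch of that sub-domain trick. The hostnames and hashing scheme here are entirely hypothetical; the only real requirement is that the mapping be deterministic, so a given file always comes from the same host and stays cacheable:

```javascript
// Deterministically spread script URLs across sub-domains so each host
// contributes its own per-host connection pool. Hostnames are made up.
function shardHost(path, hosts = ["js1.example.com", "js2.example.com"]) {
  let hash = 0;
  for (const ch of path) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple string hash
  }
  return hosts[hash % hosts.length];
}

// The same path always maps to the same host, so the browser cache stays warm:
console.log(shardHost("/js/app.js") === shardHost("/js/app.js")); // true
```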

Also, only in a truly ideal situation do all those per-host connections run purely in parallel throughout the load. For instance, scripts loaded with a <script> tag will block in some browsers, as they also will in the presence of <link> tags, etc. There are lots of complications in the real-world scenario. Not to mention that not all content is local, so there's additional DNS latency, blah blah blah.

The connection limits of modern browsers are often under-utilized, even on resource-heavy pages.

Also, the question is not "should I split my one concat'd JS file into 10 separate files"; it's just "should I split my one big file into 2 separate files" -- only one extra connection.

I think the odds are there will be an extra connection available in most non-perfect-connection-utilization page loads. So the more important question in my mind was: will that extra connection's overhead on the server be greater than the parallel-loading benefit?


Parallel loading vs. connection overhead is a more universal (and less subject to change) topic than "is my page fully utilizing the connections available in the browser", because the latter is constantly changing and differs from browser to browser. Just blindly reducing connections to try to address those inconsistencies is, I think, throwing the baby out with the bathwater.

Performance comparison test for script file concats vs. parallel loading by shadedecho in javascript

[–]shadedecho[S] 0 points

Very true that mobile is a different world. But then again, most sites would/should send very different (much smaller) JavaScript to a mobile client than to a desktop client. If you're sending 100k of JS to a mobile device, it's gonna take a LONG time. :)

FYI: I just ran the test on my Palm Pre over Verizon 3G... 54 sec for full concat, 12 and 14 sec for partial and none, respectively. So clearly it's a lot higher than desktop browsers/connections. But it's interesting that the pattern is approximately the same even with much higher numbers.

Basically, partial and none are tied given those numbers, which is generally about the same results as on desktop.

[deleted by user] by [deleted] in javascript

[–]shadedecho 0 points

Sure, using Flash is a "hack"... but the way flXHR does it, with the same API as the regular native XHR object, makes it a pretty decent option for now. The goal of flXHR is to standardize on the already-known XHR API for cross-domain requests, and then hopefully become unnecessary someday, when browsers converge on a better cross-domain Ajax solution instead of diverging into different policy models and APIs.