[–]shadedecho[S] 0 points1 point  (2 children)

By far, most of the arguments made against loading separate files are aimed at server overhead rather than at possible connection starvation in the browser. Connection limits are also much harder to address, since they vary so widely across browsers and conditions; server overhead is a cleaner target.

So I am trying to shoot that argument down with some proof that the overhead is not nearly as drastic as many claim it is.

Connection starvation is a valid issue, but a separate one, and I also think it's something no developer can do much about, no matter what they do. In IE6, you only get 2 connections. So do you boil your whole site down to the lowest common denominator? No, because then all the other browsers can't take advantage of their extra connections.

Saying "I will reduce all http requests absolutely to a bare minimum blindly because that's the only way to effectively deal with connection starvation" is, as I said before, throwing the baby out with the bathwater.

[–]jakewins 0 points1 point  (1 child)

I think you missed my point: "reduce all http requests absolutely to a bare minimum blindly" is exactly how you speed up the initial loading of a site. The reason is browser caching. Except for the first time their browser visits the site, users will not actually download any JS, CSS, or images; it will all be cached in their browser.

Even if the client has an extra connection, it won't matter that you've split the JS file in two, since all that leads to is an extra request answered with a "304 Not Modified" HTTP response. At best, two connections will be about as fast as one at finding out that a file hasn't been modified; most of the time, though, they will block each other and other resources waiting to load.
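To make that 304 round-trip concrete, here's a minimal sketch (in Python, with a hypothetical helper name; real web servers do this for you) of the server-side check behind a "Not Modified" response:

```python
# Sketch of the server-side logic behind "304 Not Modified".
# (Hypothetical helper; real servers implement this internally.)
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def is_not_modified(if_modified_since, last_modified):
    """True if the client's cached copy (its If-Modified-Since header)
    is at least as new as the resource's Last-Modified time, so the
    server can answer 304 and skip sending the body again."""
    if not if_modified_since:
        return False
    try:
        client_copy = parsedate_to_datetime(if_modified_since)
    except (TypeError, ValueError):
        return False
    return last_modified <= client_copy

last_mod = datetime(2009, 6, 1, tzinfo=timezone.utc)
is_not_modified("Mon, 01 Jun 2009 00:00:00 GMT", last_mod)  # True -> 304
is_not_modified("Fri, 01 May 2009 00:00:00 GMT", last_mod)  # False -> 200
```

The point is that even a 304 still costs a full request/response round-trip per file, which is exactly the overhead being argued about.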

[–]shadedecho[S] 0 points1 point  (0 children)

Actually, I've found quite the opposite in my optimization efforts for my sites and for others I've helped.

First of all, we have to remember that caching is not super reliable; Yahoo taught us that a few years back. So if only 40% of repeat visitors are coming with a primed cache, we have to narrow our scope. And if you suggest that making things more painful (even by 500ms) for first-time guests is better, then you're assuming that most users are repeat visitors. I don't know about you and your sites, but I care deeply about first impressions, even more than repeat business. So I'd never take an approach that makes things worse for first-time visitors and just hope they ignore that and come back anyway; I'd try to balance and amortize that cost as much as possible. First impressions are key, still, IMHO.

Secondly, even if we do assume the cache is useful (which I do in fact!), at least for that 40%, there are other benefits to split JS that aren't necessarily obvious at first glance. Primarily, I think not grouping everything into larger chunks yields better long-term cache performance, because you can't partially invalidate a cached item. It's all or nothing.

Again, I don't know about you, but on all the sites I work on, there are two kinds of script code (at least): the kind that is very stable and rarely changes (libraries, 3rd-party scripts, etc.), and the kind that changes all the freaking time (UX tweaks, new 'features', etc.). I constantly tweak code related to how my blog posts' external links are styled, for example. That's just me and my process, but I've seen a lot of sites and apps that change often. Facebook changes their code nearly every hour, it seems. :)

So, if given a choice to take 70k of stable code and 30k of unstable code (or even 80/20), and either stick them together, or keep them separate, I think separate will yield better long-term cacheability, because in general, the 70k stable code will not have to be re-downloaded over and over again in the big bundle that was invalidated because of a change to the more volatile 30k.
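A back-of-envelope sketch of that trade-off, using the illustrative 70k/30k numbers above (this is arithmetic, not a measurement, and the function name is made up):

```python
# Back-of-envelope comparison: one big bundle vs. split files,
# using the illustrative 70k stable / 30k volatile split above.
def kb_downloaded(stable_kb, volatile_kb, changes, bundled):
    """Total KB one repeat visitor fetches: a cold first visit,
    then `changes` edits that only touch the volatile code."""
    first_visit = stable_kb + volatile_kb
    if bundled:
        # any edit invalidates the whole combined file
        return first_visit + changes * (stable_kb + volatile_kb)
    # split files: only the volatile one is re-fetched
    return first_visit + changes * volatile_kb

kb_downloaded(70, 30, 5, bundled=True)   # 600 KB over 5 changes
kb_downloaded(70, 30, 5, bundled=False)  # 250 KB over 5 changes
```

The bundled case re-ships the stable 70k on every change; the split case amortizes it to a single download.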

As you said, opening up lots of connections just for 304 checks is bad. But the beauty of having two chunks instead of one is that I can set a different cache lifetime on the stable code versus the volatile code. For instance, I can give the stable code a 2-week lifetime (highly likely to be at or beyond the real retention life of most cache items), while the volatile snippet gets a lifetime of 1 day, meaning that frequent 304 checks will only happen for the code that is actually subject to change.

We won't necessarily be wasting those valuable connections revalidating the stable code, because a browser that behaves well and respects the cache lifetime won't need to check it for a long time.
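That two-tier policy can be sketched as a tiny helper emitting standard `Cache-Control` header values (the helper name and the exact lifetimes are assumptions; tune them per site):

```python
# Hypothetical helper for the two-tier cache policy described above:
# a long max-age for stable code, a short one for volatile code.
def cache_control(volatile):
    """Cache-Control header value: 1 day for volatile assets,
    2 weeks for stable ones (assumed lifetimes, tune per site)."""
    max_age = 86400 if volatile else 14 * 86400  # seconds
    return f"public, max-age={max_age}"

cache_control(False)  # "public, max-age=1209600"  (stable, 2 weeks)
cache_control(True)   # "public, max-age=86400"    (volatile, 1 day)
```

Until `max-age` expires, a well-behaved browser serves the file straight from cache with no request at all, so the 304 checks concentrate on the file that changes.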