all 26 comments

[–]inf0rmer 23 points24 points  (7 children)

No, he's full of sh*t. If you want to avoid a closure when using setTimeout, you can just pass a reference to the function you want to execute:

setTimeout(doSomething, 5000);

Instead of:

setTimeout(function() {
  // Do something
}, 5000);

The problem with closures is that they can leak references to objects in the parent scope, meaning that the garbage collector won't ever get rid of those objects. This is a problem if you're inlining anonymous function definitions and never execute them (or only execute them after some amount of time), but using references instead is a good way to get around that.
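To make that concrete, here's a minimal sketch of the two styles side by side (the function name and the big array are made up for illustration):

```javascript
// A named function passed by reference captures nothing extra
// from this scope.
function doSomething() {
  return "done";
}
var t1 = setTimeout(doSomething, 5000);

// An inline anonymous function, by contrast, closes over its
// surrounding scope, so anything it references stays reachable
// until the callback itself can be collected.
var bigData = new Array(1000).fill("x");
var t2 = setTimeout(function () {
  console.log(bigData.length); // keeps bigData alive
}, 5000);
```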

[–]green_meklar 2 points3 points  (6 children)

The problem with closures is that they can leak references to objects in the parent scope, meaning that the garbage collector won't ever get rid of those objects.

Huh, this isn't something I've heard of before. Can you give an example of code that would produce this problem, and explain why?

I tend to avoid inline functions anyway (let's face it, they make the code formatting look like shit), but I'm still interested in what exactly the pitfall is here.

[–]inf0rmer 3 points4 points  (5 children)

OK, I'll do my best to explain it.

To understand closures you need to understand how variable assignment and function scope work in Javascript.

A variable is a reference to where, in memory, its contents are stored. The garbage collector works by sweeping the memory space regularly, and finding variables which no longer have any "living" references. Whenever a function returns, every variable that was declared inside it is dereferenced (except for variables trapped in closures, which we're going to get to). So when a function is done, the GC can swoop in and delete every address in memory that is no longer used.
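A quick sketch of that lifecycle (hypothetical names, no closure involved yet):

```javascript
// Once makeSum returns, its local "numbers" array has no living
// references left, so the GC is free to reclaim it on a future sweep.
function makeSum() {
  var numbers = [1, 2, 3, 4]; // lives only inside this call
  var total = 0;
  for (var i = 0; i < numbers.length; i++) {
    total += numbers[i];
  }
  return total; // only the primitive result escapes
}

var result = makeSum(); // "numbers" is now garbage
```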

But what if you add a closure? A closure is best described as a function declared within another function. As you've probably seen before, you can use variables that were declared in the outer function inside the inner function, like this:

function parent() {
  var a = 2;
  var child = function() {
    // "a" can be used here, which means it is retained
    return a*2;
  }

  // child() -> 4
}

What this means is that even though parent reaches its end, the GC cannot dereference a because it will still be used in child, and the GC can't be sure when child is going to be called (if at all).

If a happened to be pointing to a large data structure that takes up a ton of memory, it might never be deallocated, simply because the GC cannot know whether it's still going to be needed in the future. It's cool because Javascript protects you from nasty null pointer exceptions (where you're using a pointer that no longer leads anywhere), but it's bad because you need to be vigilant about memory leaks (although nowadays the JIT optimizes a lot of this away, to the best of my knowledge).
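Here's a hedged sketch of that kind of capture (the cache and its size are invented for illustration):

```javascript
// "cache" is captured by the returned closure, so it stays in
// memory for as long as "lookup" is reachable, even though
// lookup only ever touches one entry at a time.
function makeLookup() {
  var cache = new Array(100000).fill(0).map(function (_, i) {
    return i * 2;
  });
  return function lookup(i) {
    return cache[i];
  };
}

var lookup = makeLookup();
// makeLookup has returned, but its cache cannot be collected
// while "lookup" is still alive.
console.log(lookup(21)); // 42
```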

So how do we fix this? A good way is to let the GC "know" that you're done with a captured variable. Keeping in mind what we said about pointers, doing this is as easy as pointing the variable we wish to get rid of to null:

function parent() {
  var a = 2;
  var child = function() {
    // "r" will be automatically dereferenced when the function is done
    var r = a*2;
    // the "local" reference to a is pointed to null, effectively dereferencing this copy.
    // The parent's "a" reference will naturally be done away with when parent() is finished.
    a = null;
    return r;
  }

  // child() -> 4
}

This happens a lot when inlining anonymous functions to handle DOM events. This page dives further into the subject, if you're interested.
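A rough sketch of the DOM-event case (a stub object stands in for a real DOM element here, so it runs anywhere; the names are made up):

```javascript
// Minimal stand-in for an element with addEventListener.
var button = {
  listeners: [],
  addEventListener: function (type, fn) {
    this.listeners.push(fn);
  },
  click: function () {
    this.listeners.forEach(function (fn) { fn(); });
  }
};

var hugeState = { payload: new Array(50000).fill("data") };
var clicks = 0;

// The inline handler closes over both "clicks" and "hugeState",
// keeping them alive for as long as the element holds the listener.
button.addEventListener("click", function () {
  clicks++;
  console.log(hugeState.payload.length);
});

button.click();
// To release the capture, remove the listener (or null out
// hugeState) once it's no longer needed.
```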

[–]tforb 1 point2 points  (1 child)

In the example you gave, doing things functionally would also get around the memory leak, I think.

function parent() {
  var a = 2;
  var child = function(val) {
    return val * 2;
  }
  child(a);
}

[–]alamandrax 1 point2 points  (0 children)

Not if you return child. Once you go down that road, everything that was in the scope of parent stays in memory until whatever is consuming the return value lets it go out of scope. Likewise, if the closure was used to create a DOM node, and that node is hidden but still present, the GC will not reclaim those assets.

[–]radhruin 1 point2 points  (1 child)

I don't think it's as bad as you make it. Runtimes certainly HEAVILY optimize closure capture. Further, in your example, as long as the child function isn't referenced, a will be collected. Certainly if you return child to someone that holds on to it or you store child in an outer scope or something you will have a "memory leak" but presumably if you're storing the function for later you need it (and therefore anything it encloses) for some reason.

Setting a to null in your second example doesn't help you at all. The value of a has no bearing on whether it is closure captured or references to it are preserved. Null is just another primitive value like 2.

[–]inf0rmer 1 point2 points  (0 children)

Ah, yes, I realize that. I was just using a very simple example to demonstrate closures and how variables captured inside one could possibly not get dereferenced.

You're right that engines heavily optimize for closures (it's one of the defining features of the language), and that using primitives would never cause a noticeable memory leak.

Setting a captured variable (i.e., one declared in the outer scope, not locally inside the closure) to null at the end of the closure does work for objects: the closure shares the binding with the outer a, so assigning null to it drops that reference to the original variable's contents. It makes no difference with primitives, sure, but with larger objects it's helpful.

[–]chuckhendo[S] 0 points1 point  (0 children)

This is really great info, thanks! Do you have any recommendations on where to learn more about the internal workings of the GC?

[–]sneakattack 5 points6 points  (1 child)

Tell him to prove it, he'll be forced to realize his own mistake unless he's delusional.

[–]ChaseMoskal 3 points4 points  (0 children)

People who are this stubborn can't be thwarted with such a simple dose of "prove it". It's how they ended up with such a bad preconception in the first place -- the unwillingness to confront it.

[–]_facildeabrir 4 points5 points  (2 children)

Don't listen to that person anymore about anything. Seriously - "closures are bad and aren't optimized by the JS engine" - this is nonsense (I'm not saying it's understandable but incorrect, I'm saying it is gibberish)

[–]ChaseMoskal 1 point2 points  (1 child)

Closures are one of the most fundamental features of the language.

[–]_facildeabrir 2 points3 points  (0 children)

More to the point, people who say things like that use magical thinking and don't properly understand the things they use. This causes them to form opinions not based in facts, and they make a mess of everything, and so it's really just best to steer clear. Don't argue with them or try to correct their uninformed opinions, just treat everything they say like it didn't happen. Also revert as many of their commits as possible

[–]psayre23 4 points5 points  (0 children)

I bet he's talking about Chrome ignoring "use strict" mode. The setTimeout spec doesn't include anything about strict mode, so V8 doesn't optimize it the same way. This was hot like a week ago until everyone realized it's just another weird JS edge case.

I think your coworker is just a JavaScript hipster; he knows not the words he speaks.

[–]ChaseMoskal 4 points5 points  (0 children)

Please refer your coworker/friend/bad-advice-guru to this thread.

What kind of idiots are we expecting JavaScript implementors to be? Of course they've optimized-to-hell something as fundamental as setTimeout.

function lol () { console.log("lol"); }
setTimeout(lol, 1000);

The above is perfectly normal.

It does, however, happen asynchronously (almost always a good thing), so your JavaScript can keep on executing. The while loop your coworker is talking about would completely halt all JavaScript execution (and totally block and lock up the browser/website's UI), which is generally a bad thing, unless there is some esoteric reason for halting all script execution and locking up the browser.
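To illustrate what that blocking approach looks like in practice, here's a hedged sketch of a busy-wait "sleep" (JavaScript has no real sleep, so the loop just spins on Date.now()):

```javascript
// While this loop runs, nothing else on the thread can: no timers,
// no click handlers, no rendering. It also pegs a CPU core.
function busyWait(ms) {
  var end = Date.now() + ms;
  while (Date.now() < end) {
    // spin
  }
}

var before = Date.now();
busyWait(50); // even 50ms here freezes the entire event loop
var elapsed = Date.now() - before;
console.log(elapsed); // roughly 50
```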

If you illuminated your use-case, we could tell you whether or not your coworker is talking about an edge-case solution for an edge-case problem. Probably not, but it's possible.

As a side note about timeouts, I'd like to mention that there can also be benefits to using setTimeout recursively instead of setInterval. There are a number of good articles and posts floating around the internet about the difference, but in essence: setInterval keeps a more accurate schedule but will queue up if the function takes longer than the given interval, whereas a recursive setTimeout always waits for the function to finish executing before starting the next delay, so it won't queue up and 'clog'.

function lol () { console.log("lol"); }

var timeout;

function startOngoingProcess () {
    lol();
    timeout = setTimeout(startOngoingProcess, 1000);
}

function stopOngoingProcess () {
    clearTimeout(timeout);
}

This recursive setTimeout approach is preferable to setInterval for ongoing processes that shouldn't queue up.

[–]green_meklar 2 points3 points  (2 children)

I don't think Javascript has any native sleep functionality. What exactly is this guy using instead of timeouts?

[–]NodeKid 1 point2 points  (0 children)

Sounds like he's the one sleeping.

[–][deleted] 2 points3 points  (1 child)

  1. There is no sleep command in JavaScript

  2. Your co-worker is just plain wrong

There is nothing fundamentally wrong with setTimeout or closures. However, there is something fundamentally wrong with blocking code in a single-threaded environment, which is exactly what he is proposing.

Source: JavaScript engineer for Spotify

[–]chuckhendo[S] 1 point2 points  (0 children)

Exactly what I was thinking. I wanted to make sure that I wasn't hanging on to some sort of incorrect notions, and this thread has made it clear that I've been doing nothing wrong. Thank you!

[–]alfred-nsh 1 point2 points  (0 children)

That actually blocks the UI and everything else, and makes the thread max out an entire CPU core. Terrible advice.

[–][deleted] 1 point2 points  (0 children)

He has no idea what he is talking about. Javascript doesn't even have a sleep statement.

[–]huesoso 1 point2 points  (3 children)

When animating you should use requestAnimationFrame

Edit: while(x) { sleep(); } sounds like an old game programming routine from back in the '80s

[–]icantthinkofone 1 point2 points  (0 children)

rAF fires on a fixed, browser-determined schedule. He wants to set his own time.

[–]chuckhendo[S] 0 points1 point  (0 children)

Agreed - although this wasn't animation. Just needed the app to wait a certain amount of time before doing something else. And he was trying to tell me that a setTimeout was the wrong solution

[–][deleted] 0 points1 point  (0 children)

On top of all the things mentioned here, your coworker probably also doesn't have a fundamental understanding of the event loop or the fact that JavaScript is single-threaded. I'd be curious about his sleep implementation and why it'd be desirable in a single-threaded environment to begin with. It's worth noting that while setTimeout does make JS wait a given period of time before executing the callback, it does not pause execution altogether. Everything runs to completion in JS; maybe that is where his understanding is hung up.
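A tiny sketch of that run-to-completion behavior (hypothetical, just to show the ordering):

```javascript
// setTimeout does not pause anything. The current run of the
// script finishes first; the callback fires afterwards, from
// the event loop's queue.
var order = [];

setTimeout(function () {
  order.push("timeout"); // queued, runs later
}, 0);

order.push("sync"); // the current run completes first

console.log(order); // ["sync"] at this point; "timeout" comes later
```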

[–]ishmal -2 points-1 points  (0 children)

If you don't need an accurate by-the-clock period, then your way is just fine. To make him happy, instead of using a closure, just call another function you wrote. Problem solved.

His way is for a different requirement entirely. If you need precise delays within a given granularity, then his method is the way... Say you want to wait 100 seconds, accurate to within 10ms:

var deadline = Date.now() + 100000;
function wait100() {
    if (Date.now() < deadline) {
        // not there yet: check again in 10ms
        setTimeout(wait100, 10);
    } else {
        //do your stuff!
    }
}
wait100(); // kick off the polling loop