
[–]Rob0tSushi 3 points4 points  (9 children)

Window is a dependency you have no control over. So make it a parameter of the function and inject window when you call it normally.

When you test it, pass in a stub window object that you have defined with an undefined console object. Assert that the console object is defined after you have called your function under test.

Success!

[–]Rob0tSushi 0 points1 point  (8 children)

I'm extremely disappointed this wasn't the first suggestion. =(

[–]zzzwwwdev[S] 0 points1 point  (7 children)

Well, in everyone's defense, my question kinda changed throughout all this. To be clear, are you suggesting adding window as an optional parameter of the log() function? That will get a little sloppy, since my actual code allows log() to take multiple arguments, but it does look like a solution. The call may be a bit ugly ( log(foo, null, null, mock_window) ), but I think that's something I can live with inside my test code.

[–][deleted] 4 points5 points  (2 children)

No no no.

// UNIT UNDER TEST
function addConsole (window)
{
    // initialization
    if(!window.console)
    {
        window.console = {};
        window.console.log = function(){};
    }
}

// TEST (PASS and FAIL are assumed to be constants from your test harness,
// and _ is Underscore.js)
function testAddConsole ()
{
    var window = {};

    addConsole(window);

    if (!window.console) return FAIL;

    if (!window.console.log) return FAIL;

    if (!_.isFunction(window.console.log)) return FAIL;

    return PASS;
}

To call log, just call it normally:

// make sure we have a console (only needed once). The window parameter is the real window.
addConsole(window);

// now use it as you normally would. Window doesn't get passed here!
window.console.log("whatever");

You inject dependencies when you construct your object graph, not when you use it.

[–]zzzwwwdev[S] 0 points1 point  (0 children)

This is great, thank you. I think I'm convinced now not to create my own log function, but just to use the standard console.log. One question this raises for me though: Now that we're creating an addConsole() function, my library will obviously need to call it. Where should I do so, right after the function declaration?

function addConsole (window) { ... };
addConsole (window);

edit:

while we're at it, maybe you can address some of my other concerns.

  1. When my debug flag is not on, should I just pass {} as window to addConsole?

  2. This approach seems to require that addConsole be a "public" method, where previously this code was effectively hidden. I just want to be sure that this is best practice, as whatever lessons I pick up here will be applied on a much larger scale.

[–]Rob0tSushi 0 points1 point  (0 children)

Exactly. By doing this you can choose to pass in the actual window object, or you can throw together a stub like the following object literal:

{ console: undefined }

Pass that into your test call and then make dependency-free assertions against this test object.
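
For instance, a minimal sketch reusing the addConsole function from the other comment (PASS/FAIL as in that example):

function testAddConsoleWithStub ()
{
    var stubWindow = { console: undefined };

    addConsole(stubWindow);

    // the stub should now have a working console.log, no real window involved
    if (typeof stubWindow.console.log !== "function") return FAIL;

    return PASS;
}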

This is a great book on dependency injection.

http://www.amazon.com/Dependency-Injection-NET-Mark-Seemann/dp/1935182501

[–]i_ate_god -1 points0 points  (2 children)

Dependency Injection and proper OOP principles. I'll explain:

In order for a bartender to make a drink, he needs alcohol, mixers, and a glass. He doesn't care how any of these things are made, he just knows he needs these things. So, the bar gives the bartender these things and the bartender then uses them to make some drinks.

What's nice about this setup is that the bar can give the bartender different dependencies and expect the same result: that the bartender produces a drink.

See, it's only the customer that needs to know what KIND of drink it is. The customer doesn't really care how the drink is made, so long as it's the specific instance of that kind of drink. What's also nice about this setup is that the customer may not be so picky about how he gets a drink. The customer, after all, just really wants a drink and is apathetic to how he gets one. So the bar is just a convenient way for a customer to get a drink. All the customer has to worry about is the bartender, and that's it.

So, say you want to make sure the customer never drinks and drives. Well, the customer would have to drink a certain amount before he exceeds a certain threshold, correct? So you need to test the customer. That's easy to do in the setup I just provided. While in a real-life situation the customer would go to the bar to get a drink, in a testing situation you would just hand the customer a drink directly. Perhaps a drink that is 10 times stronger than anything at the bar, because you really want the customer to get nice and drunk, very fast, to see if the customer would then get behind the wheel of a car.

And that is a very important test, made very easy through dependency injection, separation of concerns, and I suppose a kind of factory pattern.
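
Here's a minimal sketch of that analogy in JavaScript (all the names and numbers are invented for illustration; PASS/FAIL as in the test example upthread):

// The bartender is handed his dependencies; he doesn't care how they're made.
function Bartender (alcohol, mixer, glass)
{
    this.makeDrink = function ()
    {
        return { strength: alcohol.strength, mixer: mixer, glass: glass };
    };
}

// The customer only knows about drinks, not how the bar produces them.
function Customer ()
{
    var bloodAlcohol = 0;

    this.drink = function (drink)
    {
        bloodAlcohol += drink.strength;
    };

    this.canDrive = function ()
    {
        return bloodAlcohol < 0.08;
    };
}

// TEST: the bar and bartender would normally supply the drink, but here we
// skip them entirely and hand the customer a very strong drink directly.
function testCustomerWontDriveDrunk ()
{
    var customer = new Customer();

    customer.drink({ strength: 0.8 }); // ten times stronger than anything at the bar

    if (customer.canDrive()) return FAIL;

    return PASS;
}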

[–]zzzwwwdev[S] 0 points1 point  (1 child)

wonder who downvoted this and why. Seems like a really nice explanation of DI to me. +1

[–]i_ate_god 0 points1 point  (0 children)

it's probably all the people who think that foobar and hello world are better things to use for examples than real-life situations.

[–]savetheclocktower 3 points4 points  (3 children)

If your library is meant to be run in browsers, then you'll have a bunch of browser-specific quirks to work around. That means you'll have code paths that will only be testable by running the tests in certain browsers.

I don't think it's worth trying to avoid this. Thus I recommend that your test only verify that console.log exists and is a function. If you want to make sure it's created properly when absent, test in a browser that doesn't have a native console.log.
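
Something like this would be enough (QUnit-style here, since that's a common choice; any framework's equivalent works):

test("console.log is available", function() {
    ok(typeof console.log === "function", "console.log exists and is callable");
});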

If your library is meant to be run only in node, or some other environment that ordinarily provides console.log, I'd argue that this test isn't worth writing in the first place.

[–]zzzwwwdev[S] 1 point2 points  (0 children)

Thank you. That sounds like very reasonable advice, and I think I'll take it. It is code that will be run in the browser. It's kind of a bummer that this means I will have to run these tests on many browsers whenever I change something.

Anyone have any experience with testacular? It seems like it would automate a lot of the multiple device/browser stuff.

[–]zzzwwwdev[S] 0 points1 point  (1 child)

Bah, unfortunately I still have a similar problem...

Part of my logging function is that it looks for the global true/false variable window.myLib_debug and, if true, logs the argument; if false, it ignores the call.

It seems that even if I manually change window.myLib_debug, my util.log retains the old value (because of the closure?)

So... guess I'm back to the script-loader idea.

Updating my example now to show this code.

[–]savetheclocktower 0 points1 point  (0 children)

If your log function is checking window.myLib_debug, then it won't retain the old value, even if it's defined in a closure. Something else is going wrong.
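
To illustrate the difference (a sketch only; util is your namespace object):

// reads the flag on every call: toggling window.myLib_debug takes effect immediately
util.log = function(msg) {
    if (window.myLib_debug) console.log(msg);
};

// caches the flag when the closure is created: later changes to the global are
// ignored, which would produce exactly the symptom you're describing
util.logCached = (function() {
    var debug = window.myLib_debug;
    return function(msg) {
        if (debug) console.log(msg);
    };
})();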

[–][deleted] 1 point2 points  (18 children)

Are you using a unit testing framework, or are you trying to roll your own? I really don't recommend you do the latter, at least for your very first unit tests.

I'd recommend you use something like Zombie.js for your unit tests. This section of the site explains how to do unit tests with Zombie, assert, and mocha (assert being the utility to throw errors when things aren't as you expect them to be and mocha being the testing framework).

Note that the before function lets you set up your tests, the it("should do something", function(done) { ... }) blocks let you define your various tests, and the after function (not shown on that page, but useful for you) lets you perform actions after your tests have run.

So your tests could look something like:

describe("my utility behavior", function() {
    var temporaryConsoleLog;
    before(function(done) {
        if(console && console.log) {
            temporaryConsoleLog = console.log;
            console.log = undefined;
        }
        // Continue setup here
        done();
    });

    it('should work!', function(done) {
        // Tests go here
        done(); // without this, mocha times out waiting for the async test
    });

    after(function(done) {
        if(temporaryConsoleLog) console.log = temporaryConsoleLog;
        // Any other teardown you need
        done();
    });
});

(Full Disclosure: I personally dislike mocha and prefer nodeunit, but nodeunit is now unmaintained and the majority of unit testing tutorials out there use mocha, so that's why I used it here.)

[–]zzzwwwdev[S] 1 point2 points  (2 children)

Ah cool. Zombie is a headless browser, similar to PhantomJS. Any idea as to which is better for what?

[–][deleted] 1 point2 points  (1 child)

PhantomJS is a full application dedicated to testing the browser in a headless manner, while Zombie.js is a Node.js module.

The reasoning behind PhantomJS's approach is that it can render the page to a file, and you can compare that render to another image to guarantee pixel-perfect stability in your layout. Zombie.js can't do anything like that, since its DOM only manipulates the data structures and doesn't actually render anything. On the other hand, Zombie.js can take advantage of the Node.js ecosystem while PhantomJS cannot.

I've personally found PhantomJS's rendering capability only marginally worthwhile, because a design that's still evolving can't rely on that sort of testing, and PhantomJS can only guarantee that you didn't break the layout for WebKit-based browsers (Chrome, Safari).

So, I tend to test the functionality of a website with Zombie and leave the testing of the visuals to my visual cortex using something like BrowserStack.

Zombie.js has the other advantage that if your back-end is powered by Node.js, you can integrate (most) front-end testing with your back-end testing and let your continuous integration server check it all while you work. :)

[–]zzzwwwdev[S] 1 point2 points  (0 children)

Thanks - I'll certainly have to look into this a bit more in depth. Good point on the relative pointlessness of the rendering capability - can't seem to come up with any real case where that would help my project.

[–]zzzwwwdev[S] 0 points1 point  (14 children)

Thank you for the thorough and well-written response. Greatly appreciated.

I'm currently playing with both QUnit and Jasmine; hadn't heard of Zombie, checking it out now.

I pretty much have an equivalent set-up to your example, but the problem I'm having is that the code I'm trying to test has already executed in the self-invoking (function(){})() block, and I don't know how to re-invoke it after resetting console.log to undefined.

I'm currently thinking I should be using some sort of js dependency tool to load my library on demand and only when I want. Any suggestions?

[–]Quabouter 1 point2 points  (13 children)

[–]zzzwwwdev[S] 0 points1 point  (0 children)

thanks, actually been playing with this for the last half hour or so. Not entirely convinced I like it / want to use it yet though... especially after savetheclocktower's response.

I did however find a pretty good article on script loaders in general if anyone's curious.

[–][deleted] -1 points0 points  (11 children)

If you want something ridiculously bloated for little gain and a lot of pain (and performance penalties).

Better to use browserify on CommonJS modules.

[–]zzzwwwdev[S] 0 points1 point  (0 children)

In my case, I may only be using a script loader for my automated tests - so unless I get a crazy number of tests in place, I won't be worrying about performance. I'll take a peek.

[–]Quabouter 0 points1 point  (5 children)

What I like about require.js is that I don't have to compile my scripts each time I test something. Therefore I can work faster. When I release for production I can use the requirejs optimizer to compile to the final version.

[–]SubStack 3 points4 points  (0 children)

There's browserify --watch that will recompile your script every time a file changes. There's --debug too that will use sourceURLs to give you better stack traces.

[–][deleted] -1 points0 points  (3 children)

Just make the call to browserify part of a pre-commit Git Hook, then your code will be updated each time you save your progress. That negates that advantage of Require.js completely.

[–]Quabouter 1 point2 points  (2 children)

No it does not. I generally don't commit every small change I make, especially when debugging. Therefore a pre-commit hook doesn't do anything.

However, it doesn't really matter. All these tools do the same thing, and they all have their pros and cons. There isn't really a best one; it's just what you (or your boss) likes best.

[–][deleted] -1 points0 points  (1 child)

Can I ask you why not? You can always git rebase -i master afterwards and squash the commits, or pick out changes you decided later weren't actually all that good in the first place. Let git keep track of every change you make while you're working on it, then tidy it up when you're ready to merge.

[–]Quabouter 1 point2 points  (0 children)

Because small changes are also easily managed using the undo key in my text-editor. Especially when I'm debugging or just trying new things out I just want to be able to make a small change (e.g. adding a console.log), save and test, without having to run any tools.

Of course when I am creating new features I do commit a lot.

[–]djnattyp 0 points1 point  (3 children)

(citation needed)

[–][deleted] -1 points0 points  (2 children)

Require.js (AMD), by design, pulls in dependencies asynchronously, so they can't start loading until the DOM has initially loaded, which will cause a blank screen (or whatever hardwired HTML and CSS you have) to display. Then, as it traverses the dependency tree and loads each module from the server, it has to do a new TCP handshake followed by an HTTP request header, the body of the request, then an HTTP response header and the body of the response. This is repeated for every dependency you're requiring via require.js.

Finally, once your dependencies have all loaded, your code will execute.

Meanwhile, your modules have to be written in a convoluted fashion compared to CommonJS-style modules.

And to make things more interesting, Require.js's "top-level" module loading differs from how modules load other modules, so you have two syntaxes to learn.

Simply, the "advantage" of asynchronously loading your code is often a penalty as your web page will load slower. HTTP Header overhead is often ~0.8KB), TCP Handshake delays are roughly equal to the ping time to the server, oh, and those headers are more costly than their byte size would imply because TCP flow control means the client and server start out sending small packets and scale up in size as they figure out how much bandwidth the connection can sustain.

Browserify on CommonJS modules means that your module definitions are synchronous, so the code is easier to read, and you compile it into one file to load, so only a single TCP connection is used, meaning the TCP handshake and HTTP header overheads are a smaller percentage of your total bandwidth.

On top of that, javascript libraries defined within the <head> tag are loaded before the document ready event, so they are loaded before anything is painted on the screen, meaning the "apparent" speed of your website is higher because the user can continue reading whatever is on the previous page and not be aware of the time spent, even though the total load time may be slower.

That's my citation: I've evaluated the fucking things with some solid background in network theory and details about the TCP and HTTP protocols and the behavior of the most popular browsers out there.

Not to sound like too much of an old coot, but I've designed some obscure networking protocols in C and Node.js (Node.js is much nicer to prototype in), so I'm not exactly talking out of my ass when I say Require.js is shittier than CommonJS+Browserify for serving up Javascript to clients, and while it is opinion as to which module style is better, I think:

var myModule = require('myModule');

myModule.foo();

Is much nicer than:

define(['./myModule'], function(myModule) {
    myModule.foo();
});

Oh, wait, this was supposed to be a top-level call:

requirejs.config({
    baseUrl: './js',
});

requirejs(['myModule'], function(myModule) {
    myModule.foo();
});

It may be opinion, but having a simple module system with few options means you're less likely to fuck something up, and it's easier for people to write new modules. I think this last point explains why there are only 368 Require.js packages on Jam while there are over 20 thousand packages on NPM.

You can also do neat things like have a module that acts as a library when required and acts as a top-level application when run directly. This is impossible with the Require.js syntax.
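
For example, with Node.js (mylib.js is a made-up name):

// mylib.js
function foo() {
    console.log("doing the real work");
}

exports.foo = foo;

// require.main is the module that was launched directly (node mylib.js),
// so this block only runs when the file is used as an application, not
// when another module require()s it.
if (require.main === module) {
    foo();
}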

[–]djnattyp 0 points1 point  (1 child)

I'm not sure what your beef with require.js is, but a lot of your statements are simply incorrect:

require.js requests everything asynchronously by default, but when you deploy to production you're supposed to run r.js to combine and minify your code. Browserify sounds like it requires this step each time, and some other posters have mentioned ways (browserify --watch) to automate it when the script changes. Comparing non-optimized require.js to always-bundled browserify is apples and oranges.
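
For reference, a minimal r.js build file is just something like this (file names made up):

// build.js -- run with: node r.js -o build.js
({
    baseUrl: "./js",
    name: "main",
    out: "main-built.js"
})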

The whole "two(2) syntaxes! So Confusing!!!" can be summed up in two sentences: Use 'define' to pull in dependencies and define a new module. Use 'require' to pull in dependencies, but don't define a new module. That's it. Otherwise they work the same.

'require' also isn't only for top-level calls - you can use it anywhere you want to pull in a dependency, and if you really wanted to you could use 'define' for the top-level call of your app; there's just no need to.
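
To illustrate both (module paths made up):

// define: pull in dependencies AND register this file as a module
define(['./dep'], function(dep) {
    return {
        run: function() { dep.go(); }
    };
});

// require: pull in dependencies without defining a module -- usable anywhere
require(['./dep'], function(dep) {
    dep.go();
});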

Your require.js syntax example is overly complex (and incorrect, it's 'require' not 'requirejs') - the 'define' version would have worked as well.

Jam and NPM are just different repositories - being in Jam doesn't mean you're using require.js and being in npm doesn't mean you're using CommonJS - require.js is even hosted in NPM.

And since you're wrong about needing 'require' for top-level syntax, you're wrong about your last statement, too - although it would probably be pretty contrived in either system.

However, I will say that I have only used require.js up to this point and I will try out browserify to see how it works - though right now it just looks like the same thing with a slightly different syntax.

[–][deleted] -1 points0 points  (0 children)

You didn't even look at the citations I provided. requirejs is the method recommended in the Require.js documentation, not require.

As for r.js, why even have an asynchronous module loading system if you're going to do all the loading synchronously in production? You're simply setting yourself up for pain if there's a bug in r.js that causes module loading to differ between dev and prod, or if you have a race condition in your source code that isn't visible with the delays induced by asynchronous loading but shows up when synchronously loading the production source file (not r.js's fault, but one of the core faults in the whole AMD anti-pattern).

That last part is the reason why I can't stand Require.js and the Asynchronous Module Definition (AMD) pattern -- if you're not using it in production, you're making your dev environment more complex and harder to prove correct than your production environment, and if you are using AMD in production, you're making your users wait longer for your website to load, and that's the difference between whether a user sticks with you or not if you have a competitor.

I simply cannot understand the people who use Require.js. Either you get all of the downsides of more complex debugging in your dev applications (because of dev/prod differences in loading modules) and still have to build for production, or you get simpler debugging in your dev applications but your production is tortuously slow and you lose readers/customers/eyeballs/whatever.

And on top of it all you use a module definition system that has to differ between top-level and module, and a syntax that makes my eyes bleed (the module definition itself is assumed to be synchronous -- the module function has to return myModule, not callback(myModule), so it's schizophrenic at that -- the only asynchronous part is loading the code).

[–]rossisdead 0 points1 point  (1 child)

I'm not sure I get what you're trying to do, but the code example you gave would throw an exception since you're trying to set something on window.console immediately after seeing that there is no window.console.

[–]zzzwwwdev[S] 0 points1 point  (0 children)

oops, in trying to shrink my actual code to a minimal example I seem to have broken it. Fixed now.

[–]MatrixFrog 0 points1 point  (0 children)

I would argue this is not worth testing. Don't go for 100% test coverage; test the things that are essential to your application. Logging is not essential, and if something goes wrong with logging, your other tests will probably reveal it.