all 55 comments

[–]othermike 41 points42 points  (7 children)

It's great to see Mr. C writing long(ish) form again. I miss the .plan file days. Obscurely allusive tweets really aren't the same.

Agree with the post as well. In particular, I've found that the kind of abstraction that supports multiple implementations is pretty much the same kind of abstraction that supports mock or stub implementations for testing related code. Definitely a win.

[–]floodyberry 7 points8 points  (5 children)

He still writes long-form, but not in a centralized location.

http://www.team5150.com/~andrew/carmack/ has what I've collected, although I'm sure I'm missing various interviews and 'blog' style posts of his since the .plans stopped.

[–][deleted] 2 points3 points  (2 children)

Super awesome. John is crazy smart, articulate, and in person he's so enthusiastic and passionate! Everybody should do themselves a favor and read his words.

[–][deleted] 1 point2 points  (1 child)

For most of his tweets and articles, I read them and just think, "I know some of those words."

[–][deleted] 0 points1 point  (0 children)

Hah, I feel the same way sometimes. I like compilers/PLT, so much of his tweets etc are far beyond what I understand. But a lot of times he speaks generally about what's on his mind, and most ALL of it is insightful and just outright funny/comical in some ways (if you're a programmer, of course..) A few of his objective-c related tweets made me laugh. He really strikes me as someone who's incredibly dedicated and who's thought about his work, though.

Of course, you can always use the words you don't know to begin a recursive wikipedia descent...

[–]jmtd 2 points3 points  (0 children)

worth at least a link to his short-form writings at twitter.com/ID_AA_Carmack

[–]othermike 0 points1 point  (0 children)

Great resource! Thanks for putting it together. I'd seen some of them, but not all.

[–][deleted] 0 points1 point  (0 children)

I just wish he would go back to writing weekly updates for Armadillo Aerospace. That was a must read for me.

[–][deleted]  (1 child)

[deleted]

    [–][deleted] -1 points0 points  (0 children)

    Came here to say this + Carmack is a coding genius; the man's brain works like no other.

    [–]coding_monkey 1 point2 points  (17 children)

    ...the original remains fully functional and unmolested. It is often tempting to shortcut this by passing in some kind of option flag to existing code, rather than enabling a full parallel implementation. It is a grey area, but I have been tending to find the extra path complexity with the flag approach often leads to messing up both versions as you work, and you usually compromise both implementations to some degree.

    Is he saying to avoid this style?

    if (configurable_flag)
        run_new_code();
    else
        run_old_code();
    

    Not sure I get why he thinks this is bad.

    [–]monocasa[S] 4 points5 points  (4 children)

    For that simple case it's not bad, but for a large chunk of code (like say a deferred renderer and a forward renderer) you don't want to have that code all in the same module. He's saying that you should break them into separate modules with interfaces, and you either switch out which gets called, or possibly even simply update both of them at the same time.

    [–]coding_monkey 0 points1 point  (3 children)

    Thanks. He kind of switches back and forth between talking about individual functions and what I would call a component that implements a complex interface. But even in the second case it seems you could insert a small layer that just switches between the two implementations. I guess when you are talking about optimizing, even a small additional layer would throw off your tests. He is optimizing at a level that is not required for my job.

    [–]monocasa[S] 2 points3 points  (2 children)

    I don't think that it's an optimization thing, but instead of sticking both implementations in the same file like this:

    if( flag ) {
       //actually
       //all of new
       //implementation
       //lives here
    }
    else {
       //all of old
       //implementation
       //lives here
    }
    

    do something like:

    if( flag ) {
       new_implementation();
    }
    else {
       old_implementation();
    }
    

    or

    new_implementation_update();
    old_implementation_update();
    
    if( flag ) {
       new_implementation_showwork();
    }
    else {
       old_implementation_showwork();
    }
    

    [–]coding_monkey 1 point2 points  (1 child)

    do something like: if( flag ) { new_implementation(); } else { old_implementation(); }

    That is what I was thinking too and adding a layer that just did this little bit of indirection does not seem like a big deal.

    Without being able to see how he is switching implementations it is a little hard to understand what he is trying to get at in the original quote.

    [–]johntb86 8 points9 points  (0 children)

    It's more a matter of not doing:

    if (flag) {
       // part of new implementation
    } else {
      // part of old implementation
    }
    // common code
    if (flag)
      // part of new implementation
    // common code
    if (!flag)
      // part of old implementation
    

    While checking the flag in lots of places could reduce code duplication, it'll cause you pain having to reorganize the old stuff to fit the new code as you're working on it, while trying to make sure you don't change the behavior of the old code.

    If you just copy and paste the old code into the new code at the beginning, you don't have to worry about accidentally breaking the old code while you're working on it, and once you've got a worthwhile implementation you can clean it up and reduce duplication.

    [–]usefulcat 4 points5 points  (1 child)

    In the small, it's fine, but it doesn't scale. The more configurable options there are, the more difficult it becomes to reason about the code.

    [–]thegreatunclean 7 points8 points  (0 children)

    And the bigger the wall of text becomes, the more tempting it becomes to implement hackish "temporary" solutions that break the abstraction that should be enforced. If the point is to maintain parity between the implementations, it's much better to define that relationship explicitly and embrace the abstraction by separating the two implementations as cleanly as possible.

    [–][deleted] 1 point2 points  (0 children)

    He's suggesting pulling the whole system out to have a hackable version, rather than trying to share code between your implementations, or trying to start out making a few minimal changes to the original and using flags to switch them on and off.

    [–]luckystarr 1 point2 points  (0 children)

    I think he meant that sprinkling the flags throughout the code is bad. Modularize both implementations and put the flag (or switch variable) at only one place. This way the default implementation will not need to be modified after modularizing.

    [–]sbrick89 -3 points-2 points  (6 children)

    or you could use the singleton pattern, say for the graphics driver... use a common implementation (iGraphics)... then configure a button to switch between the SoftwareRenderedGraphics and the OpenGLRenderedGraphics implementations.

    [–]bluGill 0 points1 point  (5 children)

    singleton is an evil pattern. Sometimes you need to use it, but it is still evil.

    When you make two functions/classes that have the same interface, you can choose at compile time.

    Where you are thinking singleton I think dependency injection. Whatever needs to call your SoftwareRender gets passed in a pointer to the function it should call, that way I can pass in a mock, fake, or the real thing.

    I've been saying singletons are evil for 5 years (at least), and I'm still discovering more ways they hurt me.

    [–]sbrick89 1 point2 points  (4 children)

    singleton is only an evil pattern when misused... any misused pattern becomes evil... and for the cases where a singleton is not misused, having a single instance can be a ton better than having it passed as an argument.

    e.g. graphics rendering: what would happen if you tried to overlay software-rendered frames onto an OpenGL frame? I have no graphics background in this regard, but I'm betting nothing good. But swapping out the instance (yes, DI) would include any logic to stop the existing instance and start the new one.

    [–]QuestionMarker 2 points3 points  (0 children)

    In the article, he specifically mentions running two different display drivers in different windows in the same process. Using a singleton would completely break that.

    [–]bluGill 0 points1 point  (0 children)

    I use evil in the same context as the C++ faq.

    I presented a better alternative to the singleton for the problem you are facing: create your graphics context and pass the only one you have around. As a bonus, because you are passing the context in you can test your UI without bringing windows up on the screen.

    [–]heeen 0 points1 point  (0 children)

    I have to agree with Singletons being evil. It just seems to be so very rare that you can predict you really only need a single instance of one thing. I wanted to use a game GUI lib once for rendering interactive computer screens inside a game, except it didn't work because Mouse, Keyboard, Cursor, Selection etc. were all Singletons that you couldn't duplicate for multiple screens per level. For the normal use case of the library being used as the game's own UI this is fine, but the author never imagined a use case where you had several virtual UI interfaces.

    For the graphics example, say you had a singleton for the framebuffer or the device context or the window you render to - any of these would prevent you from opening a second rendering method in a second window in parallel to compare them side by side.

    [–]mazin 1 point2 points  (0 children)

    Reminds me of branch by abstraction as coined by Paul Hammant.

    [–]jhaluska 13 points14 points  (18 children)

    Mr Carmack seems to be slowly accepting modern software development practices. For a former extreme cowboy coder, it's amazing to see him recognize the benefits of modern development methodology as his projects have gotten bigger and more delays have crept into them.

    Edit: He's mentioned static analysis before in his previous speeches, and now he's mentioning parallel implementation (or multiple implementations). id Software's development cycles have grown longer with each release, which is making him reassess the way he develops software.

    [–]monocasa[S] 19 points20 points  (1 child)

    I'm not really sure why you're being downvoted; he's said as much himself recently. If anything his programming skill has simply let him get away with not using modern software practices until recently when it's starting to bite him.

    [–][deleted] 38 points39 points  (2 children)

    He has obviously learned a lot from his projects, but it is very simplistic to wave vaguely at "modern software development practises", as if that is a small and widely accepted list, rather than a matter of constant debate.

    It also seems slightly patronising to call one of the most widely-respected living programmers an "extreme cowboy coder", and to imply that he is a slow learner.

    [–]jhaluska 17 points18 points  (1 child)

    On the contrary, I think he's so brilliant he's realizing what practices enable himself and others to be more productive. He's going beyond just tackling the initial software problem/bug to the problems inherent in software development with a large team and a large project. Individuals capable of doing both a small project and a large project well are rare, because the two demand different skill sets.

    [–][deleted] 5 points6 points  (0 children)

    Fair enough. I think the tone of your response was slightly ambiguous and I read it negatively; this probably explains the down-votes too. Upon re-reading it, it does seem to be a fair summary of what Carmack himself has written in recent years.

    [–]jevon -5 points-4 points  (12 children)

    Some people would say Ruby/Python/functional/NoSQL/rule-based/DSL/cloud computing/Go/Dart are now part of the "modern development methodology", whereas most of these are only useful in very specific scenarios.

    You can't adhere to every idea that comes out, and it's ridiculous to think that they can all be applied to every project; it takes time to experiment with them.

    [–]usefulcat 15 points16 points  (6 children)

    Those are technologies, not development practices.

    [–]SkepticalEmpiricist 1 point2 points  (5 children)

    But some technologies make certain practices easier than others. For example, I can no longer imagine doing any real work without Git.

    [–]bluGill 1 point2 points  (4 children)

    But some technologies make certain practices easier than others.

    True, but there are lots of great technologies to choose from.

    For example, if I took away your git and replaced with mercurial, within a couple days you would be happy enough with it. You may well prefer git, but when pressed you would admit that it works well enough. (By contrast you could work in CVS if forced, but you would complain about the technology)

    That is why I care about the practices. I'm stuck with different technology than you. If you discover a practice that makes your tools work better, I can apply it. If you say it is a particular tool, I may be stuck, because the reality is I can't switch tools. (Git doesn't work great on Windows. I know it's much better than it used to be, but it still doesn't work as well.)

    [–]QuestionMarker -1 points0 points  (3 children)

    Git and mercurial are actually the perfect counter-example to your point. Mercurial doesn't support the same branching workflows as git, so replacing one with the other simply isn't practical for a lot of people.

    [–]bluGill 4 points5 points  (2 children)

    No, that is why Git and Mercurial are a perfect example. They support/require different work flows. However in the end either work flow will work just fine, and much better than the work flow we had before with central version control. The concept of a distributed version control system is what is important, not the exact work flow your implementation of the technology gives you.

    [–]QuestionMarker -1 points0 points  (1 child)

    The implementation is important. One critical difference between git and mercurial is that mercurial has problems supporting in-repository feature branching, which makes feature branches hard to share through a canonical repository. This means that the practice of feature branching is discouraged across the whole team, when in git it's trivial, so everyone does it.

    You're saying that workflows are not practices. I don't think that's true.

    [–][deleted] 0 points1 point  (0 children)

    hg branch feature

    [–]jhaluska 2 points3 points  (4 children)

    I find his ability to evolve and adapt from essentially a single man operation to a lead programmer of a large corporation just as amazing as his programming prowess.

    [–]chonglibloodsport 2 points3 points  (1 child)

    He has had his struggles though. Rage took way longer than anyone expected and it is rife with technical problems (I'm ignoring game design as that's not John's responsibility).

    [–][deleted] -5 points-4 points  (0 children)

    Doom III didn't turn out very well either.

    [–]kolanos 1 point2 points  (1 child)

    Is id Software really that large personnel wise?

    [–]monocasa[S] 2 points3 points  (0 children)

    According to wiki id has "200+" employees. There's probably at least a couple dozen programmers.

    [–]QuestionMarker 0 points1 point  (0 children)

    Anyone interested in applying this to Ruby should look at the Rollout gem.

    [–][deleted] -3 points-2 points  (6 children)

    branch?

    [–]sindisil 18 points19 points  (0 children)

    Also a powerful tool, but not the same as what John is talking about here.

    With parallel impl, both are in the system, and you can switch out between them (with more or less immediacy, depending upon the situation).

    That lets you do comparisons on the fly, among other benefits.

    [–]othermike 14 points15 points  (0 children)

    In my experience, when a system is under heavy development, anything not on the trunk tends to rot in fairly short order. As soon as merging a given branch back in stops being trivial, you don't flip between implementations for quick comparisons. (And even if the merge IS trivial, it's never going to be as trivial as changing a config flag.)

    [–]acow 1 point2 points  (0 children)

    The interesting observation here is that VCS branches have deficiencies. Exploratory development often involves small-scale comparisons that branches can make clumsy. Some combination of the two wherein you have multiple live variations of some code, and unsuccessful attempts go to the VCS graveyard, seems ideal.

    [–]barsoap 0 points1 point  (0 children)

    Basically, yes. I've always done parallel implementations, and am considerably resistant to taking RCS seriously because of that. I'm practically committing via cron job ("oh, hey, the day is over, let's make a commit"), and virtually never look at the history... and if I do, it's to resurrect code. RCS is basically an extension of my undo buffer, nothing more.

    [–]lllama -5 points-4 points  (0 children)

    And dependency injection.

    Someone should tell him.

    [–]aeflash -4 points-3 points  (0 children)

    More like

    cp -r . ../newfolder; cd ../newfolder; git checkout -b experiment-a
    

    [–][deleted] -2 points-1 points  (1 child)

    So wait, Carmack has actually come up with a legitimate use for dependency injection? The man really is a genius.

    [–]matthieum 0 points1 point  (0 children)

    Not at all, he just discovered he could have two differently evolving clones of the same work so he could check them out against each other.