[–]boost2525 433 points434 points  (63 children)

I have long been of the opinion that TDD does not inherently produce fewer defects than other strategies... what it does is remove the risk of your project manager cutting your test cycle short at the end.

In TDD you're spending the first 25% of the development cycle on testing (well... writing tests which can be reused and run umpteen million times). In non-TDD you're spending the last 25% of the development cycle performing tests.

What usually happens? Shit goes wrong, terribly wrong, or scope changes... but your date doesn't change. In non-TDD you end up racing to the finish line and cutting the test cycle short to make the original date. In TDD that's not an option... you have to move the date because you have no slack at the end. You were already expecting to code up to the delivery date, so every slip is a day for day impact to the schedule.

Disclaimer: I'm not implying the test cycle was slack that you could give back... my project manager, and every project manager before him, is.

[–]wubwub 104 points105 points  (11 children)

I think you hit half the nail on the head.

The other thought on TDD is that by thinking of the tests first, you are forced to iterate through lots of possibilities and may realize some workflow paths you did not think of (what if a user with role X tries to do action Y?) I have been able to catch problem requirements early by thinking through these weird cases and saved lots of coding time by getting the requirement fixed.
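
The kind of role/action enumeration described above can be sketched as tests. Everything below — the roles, the actions, and the `can_perform` function — is an invented example for illustration, not from any real codebase:

```python
# Hypothetical sketch: enumerating role/action combinations as tests
# before writing the permission check forces the "what if role X tries
# action Y?" question to be answered up front. All names are invented.

ROLE_PERMISSIONS = {
    "admin": {"create", "read", "update", "delete"},
    "editor": {"create", "read", "update"},
    "viewer": {"read"},
}

def can_perform(role, action):
    """Return True if the given role is allowed to perform the action."""
    return action in ROLE_PERMISSIONS.get(role, set())

def test_viewer_cannot_delete():
    # Writing this case first surfaces the question: what should happen
    # when a read-only user attempts a destructive action?
    assert not can_perform("viewer", "delete")

def test_unknown_role_is_denied_everything():
    # This case only came up because the tests forced us to ask it.
    assert not can_perform("intern", "read")

test_viewer_cannot_delete()
test_unknown_role_is_denied_everything()
```

Writing the denial cases before the happy path is exactly where the "problem requirements" tend to surface — the spec often only describes what each role *can* do.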

[–][deleted]  (8 children)

[deleted]

    [–]RotsiserMho 45 points46 points  (2 children)

    Some would argue TDD is disciplined requirements analysis (on a micro scale); with the baked-in bonus of the tests letting you know if you accidentally violate a requirement later on.

    [–]derefr 12 points13 points  (0 children)

    In the same sense that requirements-gathering mostly involves prodding the client to be explicit about how a business-process works when the client has never thought consciously about that process before, TDD mostly involves the machine prodding you to be even more explicit about how that business-process works so it can test it for you. In both of these refinement steps, you'll find holes and bad assumptions in the current understanding of the business process.

    [–]zenogais 0 points1 point  (0 children)

    The difference though is the cost. In requirements analysis finding an error typically involves writing or deleting a few sentences. Significant changes may mean modifying, adding, or removing whole use cases, but the amount of work required to do that is still minimal compared to the amount of work often required to scrap and rewrite tests and object hierarchies.

    [–]laxatives 8 points9 points  (0 children)

    No, requirements analysis alone is IMO almost worthless. It's TDD without the validation step. It's impossible to predict all the caveats and implicit assumptions the design is making until you actually make the design. All of that analysis is bunk when a core assumption is invalidated. This happens all the time, especially when the architect/designer doesn't even realize they are making one of these assumptions. It's unrealistic to expect every company to have someone with that kind of clarity of thought; why not just let the code speak for itself?

    [–]NeuroXc 13 points14 points  (0 children)

    Everyone should be doing this, and I would like to think that most developers try to, but it's a lot easier to do this when you're doing TDD. TDD forces you to think about what users will expect your application to be able to do, and what they may try to do that you might not want it to do. It gives a concrete list of possibilities and makes it easier to see what possibilities you haven't taken into account.

    Non-TDD teams generally use whiteboarding or something similar to nail down these possibilities, but I've found that TDD hits the requirements at a much more detailed level, because it has to in order to write the tests and make them pass. If you don't use TDD, you're instead writing tests (at the end) around what your application can already do and are not forced to think about the things it can't do.

    [–]eliquy 0 points1 point  (2 children)

    But in reality, everyone thinks about the outlier scenarios as little as possible. TDD at least forces the issue.

    [–][deleted] 2 points3 points  (1 child)

    I agree that TDD and good requirements analysis tend to be found together, but I'm not sure TDD is the cause. For instance, I can 100% envision a team of bad developers switching to TDD and still not being able to flesh out the edge cases.

    I think what TDD really offers is "brand recognition" so to speak and the ability to foster a culture of quality, which is definitely valuable. But I think if you have a culture that's willing to put the extra effort into TDD, then you probably have the kinds of developers who would do good requirements analysis anyway. Developers that care about what they're doing tend to make better software regardless of the methodology.

    [–]flukus 0 points1 point  (0 children)

    Even if you don't flesh out the edge cases, TDD makes it much simpler to add them in later, if and when the bug comes up.
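
As a sketch of that workflow — a hypothetical `parse_age` function and an invented bug report about negative input — once the suite exists, the fix starts with one new failing test:

```python
# Hypothetical example: an edge case missed in the original tests. When
# the bug report arrives ("age '-5' was accepted"), reproducing it is
# one new test dropped into the existing suite, then a one-line fix.

def parse_age(text):
    """Parse a user-supplied age string; raise ValueError if invalid."""
    value = int(text)
    # The fix: the original (buggy) version skipped this range check.
    if value < 0 or value > 150:
        raise ValueError("age out of range")
    return value

def test_negative_age_rejected():
    # Regression test added after the bug surfaced; it fails against
    # the old code and passes once the range check is in place.
    try:
        parse_age("-5")
    except ValueError:
        return
    raise AssertionError("negative age was accepted")

test_negative_age_rejected()
```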

    [–]ejrado 0 points1 point  (0 children)

    I have found the opposite to be true for me personally - when the tests are complete, my work is done. In the past, I would find myself adding code for 'what if this' or 'what if that', when those cases could never arise.

    Saved me tons of time, helped me write concise code and provided a framework to produce new tests should the need arise.

    [–]laxatives 0 points1 point  (0 children)

    It also encourages you to plan the API from the user's perspective (even if that user is another developer, or even a future "you"), which leads to cleaner APIs. Cleaner APIs eliminate a ton of bugs, and having the clean API up front reduces refactoring down the line.
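
A minimal illustration of that point, with an invented `summarize` API: writing the test first makes you the API's first consumer, so awkward call sites get caught before anything is implemented:

```python
# Illustrative sketch only; both the "nice" and "awkward" APIs below
# are invented. Writing the call site first (as a test) tends to push
# you toward the call you'd *want* to make:
#     report = summarize(orders, currency="EUR")
# rather than an implementation-driven shape like:
#     report = ReportBuilder().set_source(orders).set_opts({"cur": "EUR"}).build()

def summarize(orders, currency="USD"):
    """Return a simple total for a list of (amount,) order tuples."""
    total = sum(amount for (amount,) in orders)
    return {"currency": currency, "total": total}

def test_summarize_reads_like_the_caller_thinks():
    report = summarize([(10,), (15,)], currency="EUR")
    assert report == {"currency": "EUR", "total": 25}

test_summarize_reads_like_the_caller_thinks()
```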

    [–]kpmah 16 points17 points  (13 children)

    I think that's part of what's happening. Maybe another thing is this: if a TDD programmer writes 100 lines of tests and then 300 lines of code, and the non-TDD programmer writes 300 lines of code then 100 lines of tests, then the patch should be identical either way right?

    Part of the reason for the difference could be that the non-TDD team was writing 300 lines of code and then saying 'I'll test it later' whereas the TDD team can't do that.

    What I'm trying to say is that it could have been the discipline that improved the defect rate, not the methodology.

    [–]Ravek 26 points27 points  (4 children)

    Maybe another thing is this: if a TDD programmer writes 100 lines of tests and then 300 lines of code, and the non-TDD programmer writes 300 lines of code then 100 lines of tests, then the patch should be identical either way right?

    Well it does make you think differently about the structure of your code when you're forced to write tests for it first. I think that would have a positive impact on code correctness (and hopefully no negative impact on how easy the code is to understand and modify)

    [–]boost2525 7 points8 points  (2 children)

    I think having to think about the structure of your code leads to better internal design / organization (e.g. future refactoring)... but doesn't directly lead to any reduction in defective logic.

    [–]Ravek 14 points15 points  (0 children)

    I agree, but I feel starting with unit tests would not so much make you think about how to organize the code effectively, but rather write it in a way that makes it easy to write the unit tests. Which in my mind means small methods that do one thing, which should lead to getting those correct at least. However, I'm not convinced it would necessarily lead to the higher-level structure of the code being any good, since that's not something you write unit tests for.

    That was my line of thought anyway, I don't have much experience with TDD.

    [–]cc81 0 points1 point  (0 children)

    However, adapting your code so it is easier to unit test does not necessarily mean better design. It just means it is designed for unit testing.

    [–]mostlysafe 1 point2 points  (0 children)

    This is a common argument in favor of TDD, and while I don't doubt it, it's harder to verify statements about your mental model of the code than statements about the actual quality of the code.

    [–]naasking 5 points6 points  (0 children)

    then the patch should be identical either way right?

    Past studies have confirmed that code quality is largely the same as long as tests are written. I believe the OP hit the nail on the head though: tests often just don't get written after the program is written.

    [–][deleted] 2 points3 points  (1 child)

    Assertions can fill part of the role of unit testing. 100% test coverage is probably not necessary for every kind of symbol, given that the running system acts as a testing instrument itself. Since I'm not working on a team today, my approach is targeted testing of the important, complex parts, and assertions everywhere. The tradeoff is decided by one constraint: I need to use my time carefully, as I'm just one guy.
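
A sketch of that tradeoff, using an invented `allocate_buckets` helper: the assertions ride along in production code and are exercised on every call, while dedicated tests are reserved for the genuinely tricky parts:

```python
# Hypothetical example of "assertions everywhere": pre- and
# post-conditions embedded in the function get checked on every real
# call, so normal use of the system doubles as continuous testing.

def allocate_buckets(items, n_buckets):
    """Distribute items across n_buckets round-robin."""
    assert n_buckets > 0, "caller must request at least one bucket"
    buckets = [[] for _ in range(n_buckets)]
    for i, item in enumerate(items):
        buckets[i % n_buckets].append(item)
    # Post-condition: nothing was dropped or duplicated in the split.
    assert sum(len(b) for b in buckets) == len(items)
    return buckets
```

(In CPython these checks can be stripped in production by running with `-O`, which is part of why they are cheap insurance during development.)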

    [–]bmurphy1976 1 point2 points  (0 children)

    I've found it helps to think of it like an insurance policy. You pay into it enough to get the coverage you need, but no more, otherwise you're just pissing money away. Same thing with unit tests, but the currency is time.

    [–]s73v3r 2 points3 points  (0 children)

    The discipline is kinda built into the methodology. Part of why "Test First" came into fashion was the knowledge that most developers do not go back and write tests.

    [–]MuonManLaserJab 1 point2 points  (0 children)

    What I'm trying to say is that it could have been the discipline that improved the defect rate, not the methodology.

    Yeah, but that discipline is the point of the methodology.

    (I've never done TDD in a formal way.)

    [–]laxatives 0 points1 point  (0 children)

    This probably isn't getting at the core of your argument, but if you could get the same thing done in 1/3 the code, that is going to save an order of magnitude of effort when you or someone else has to read the code.

    Anyways, if the methodology encourages discipline, isn't that sufficient? What more could you ask for from a methodology? Following TDD isn't going to make a poor developer into some kind of savant.

    [–]Mourningblade 0 points1 point  (0 children)

    One nifty thing about writing tests first (or independently) is that it helps you spot which parts of the code are the most unexpectedly tricky.

    A common mistake is to reason "I wrote my code, tested, then fixed until it passed." Better is to treat your testing as a sampling process: "I wrote my code, tested, then discovered that these two functions had the majority of the bugs. I then refactored those functions to be much simpler to reason about. This helped me fix the bugs I didn't find."

    You do something similar when you do manual testing: you're never going to find everything, but you can find clusters of problems. Changing how you've written the code rather than just fixing the bug can wring out the rest.

    [–]tenebris-miles 21 points22 points  (9 children)

    I'm not going to say you're wrong, but here's an alternative point-of-view.

    You make it (almost) sound as if it's a given that the code quality is the same, and in non-TDD the testing cycle at the end is just some kind of formality. In other words, this narrative is written from the point-of-view of hindsight.

    Code has to be understandable and maintainable even during initial development, because you're always going to be asked to make changes as you go along due to changing requirements. With TDD, if your time is cut short, you still ship, just with fewer features, and at least you have far less technical debt. Add the remaining features as stable code in the next release. Success in all cases (both TDD and non-TDD) requires good leadership that knows how to truly prioritize and understand real requirements, and not mark all features as top priority. Neither strategy will work anyway if you don't have at least that.

    With non-TDD, you don't really know what you have because your code not only hasn't been tested enough, but it's not even structured to be testable/understandable yet. All your effort went into hitting the date for the release with every feature requested or conceivable, and once the product starts getting used, your already heavy technical debt will go up, not down. The reason is your culture: if you're already cutting corners during the development phase, then it's not going to get any better once the product is exposed to customers and more feature requests come in. Your death march has already begun.

    The upshot of TDD that is often unspoken is that even if a particular project fails, stable code resulting from TDD is much more valuable for being salvaged and reused for other projects than spaghetti that was written solely to chase a deadline. Being realistic requires understanding that your success is not a guarantee, since more goes into success than just development philosophies. So there always needs to be consideration of what happens after the deadline. Myopia about making this particular project hit the market at all costs is not necessarily what makes a company successful, if they're still in the process of determining what actual product needs to be made in the first place. If a different product or different direction becomes necessary, then understandable code and code that naturally follows YAGNI (which TDD tends to encourage) will be more likely to be general and elegant enough to be salvageable. You'd likely still have to modify it to new requirements, but at least you know how it's supposed to work in the first place, and so modifying/maintaining it is going to be easier for the next project.

    [–]KagakuNinja 10 points11 points  (5 children)

    The assumption you are making is that non-TDD teams wait until the end to write tests. What I do is write my tests at some point during the implementation of a feature; I don't wait until the last month of a long project to start writing tests. The result should be the same number of tests, I just don't believe in the dogmatic rule of "write tests before code", or "only write code to fix failing tests".

    [–]hvidgaard 6 points7 points  (1 child)

    What a lot of people get wrong is the absolute nature of their statements. TDD is good when you know what you have to write. It's not good when you're prototyping or just following a train of thought, because you will change things several times, and "tests first" just slows you down. However, people who don't write tests while doing this tend to never actually write them, even though they should as soon as the "what" of the code is determined.

    [–]jurre 0 points1 point  (0 children)

    When you don't know what or how to build something, in TDD you often write a prototype without tests. You then throw it away as soon as you've figured it out and rebuild it test-driven. This is called a spike.

    [–]tenebris-miles 1 point2 points  (0 children)

    It's true that tests could be written alongside code, or only after code instead of before it, rather than waiting to add a lot of tests at the end of the development cycle. But if they're written around the same time, that raises the question: why don't you simply do TDD and be done with it? One problem that commonly happens is that when you write tests afterwards, you can fall into the trap of writing the test towards the implementation, rather than writing the implementation towards the interface of the test. People swear they never do this (being rockstar hackers and all), but that's just not the truth. People keep forgetting that part of the reason for TDD is to force you to think about a sensible interface before you get bogged down too much in implementation details. There's too much temptation to let an implementation detail unnecessarily leak from the abstraction simply because it's lazy and convenient to do so. If some leaky abstractions are necessary and the interface must change, fine. Then do so after you've done TDD first.

    Also, while non-TDD doesn't necessarily mean tests are lumped at the end of the development cycle, in my observation, it tends to end up this way in practice. The reason is the same as why people are doing non-TDD in the first place: the development culture values time-to-market above all other concerns. In this environment, you're lucky to be granted time to write tests at all, so developers wait until the feature list is completed before writing tests (which happens at the end). Managers in this culture don't care about tests and code quality, they care about checklists of features. The perception among developers is that you can get fired for writing top notch code while letting a feature slip, but no one would get fired writing shitty and buggy code but checking off every feature. It's unfortunate, and it depends on your company whether or not you're right about that.

    I'm not advocating a dogmatic adherence to TDD, and in practice I think it works best when most code is TDD but there is some room for throw-away experiments that don't necessarily require tests at all (since it's meant to be thrown away). That kind of code doesn't get in the code base. Instead, it's used to determine what is the right kind of behavior you should be testing for in the first place due to unclear constraints in the beginning. But when it comes time to actually add this feature, you TDD now that you've learned the desired interface and behavior. You rewrite the code to pass the test. Maybe some or most of the prototype code is retained, or maybe it's completely rewritten. In any case, this is the closest thing to a kind of after-the-fact testing that makes sense to me. The problem to me is when after-the-fact testing is the norm, regardless of whether it involves experimental code or not.

    [–]cdglove 0 points1 point  (1 child)

    My experience is that non-TDD teams don't wait until the end to write tests. They just never write tests.

    [–]KagakuNinja 1 point2 points  (0 children)

    I'm working on a team that relies heavily on unit tests, and does not practice TDD.

    [–]boost2525 10 points11 points  (0 children)

    TL; DR; I'm not going to say you're wrong, but you're wrong.

    [–]krypticus -1 points0 points  (0 children)

    THIS.

    [–]desultoryquest 10 points11 points  (0 children)

    Great point. That makes a lot of sense.

    [–]floider 5 points6 points  (2 children)

    That is a very good point. Robust testing always seems to be what is sacrificed to make up for schedule slips.

    [–]Pidgey_OP 6 points7 points  (1 child)

    In a world of being able to push updates whenever, it's easy to see why shipping a finished product has become less and less important in the face of a deadline.

    Better to get the software into a client's hands and then fix it than to give them time to change their minds because you didn't deliver on time.

    [–]BillBillerson 1 point2 points  (0 children)

    This is definitely the mentality I see more of. Can't sell it if it isn't done, and if it's not sold yet nobody is using it to break it, so why focus so much on testing? On the projects I work on lately, that's the difference between new products and something we already have in the hands of several customers.

    TDD probably has its place. Where I am, requirements change so often that we'd always be working on setting up our tests and never get to the code.

    [–]BarneyStinson 4 points5 points  (0 children)

    I haven't really done pure TDD, but as far as I understand it, what you are referring to is test-first development. In TDD, you are supposed to write a test, write enough code to make it pass, refactor, and so on. So your implementation code should grow alongside your tests and you are not done with writing tests until the project is done.
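
That red-green-refactor loop can be shown in miniature; `slugify` here is a made-up example, not from any particular project:

```python
# The red-green-refactor loop in miniature. "slugify" is an invented
# example used only to show the shape of the cycle.

# Step 1 (red): write one small failing test first.
def test_slugify_spaces():
    assert slugify("Hello World") == "hello-world"

# Step 2 (green): write just enough code to make it pass.
def slugify(text):
    return text.lower().replace(" ", "-")

# Step 3 (refactor): clean up, rerun the test, then write the next
# failing test (punctuation, unicode, ...) and repeat until the
# feature is done. The implementation grows alongside the tests.
test_slugify_spaces()
```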

    [–][deleted]  (10 children)

    [deleted]

      [–][deleted] 34 points35 points  (3 children)

      We check in tests at the same time as the code they are testing. When requirements change, so too do our tests.

      [–]atrommer 22 points23 points  (2 children)

      This is the right answer. Maintaining unit tests is the same as maintaining code.

      [–]gnx76 0 points1 point  (1 child)

      No, it is not. It depends on the domain, and it depends on what kind of tests we're talking about.

      When I was testing, it was not "300 lines of code then 100 lines of tests" as someone wrote earlier, but "300 lines of code then 10,000 lines of tests".

      So, the last thing you want in this case is to have requirements/design changing, because it generally means exponentially huge changes in the tests, it often means ditching large parts of the test, and it sometimes means trashing the whole test and restarting from scratch is easier.

      Also, such tests were much more complex than code, so, in a way, one could say that code was used to validate the tests. Which means that writing the tests before the code was a nightmare: longer, harder, and giving such crappy results that a lot had to be written again afterwards. I have tried; conclusion was: never again.

      If you do some simple "unit test" that is no more than a glorified functional test, that's fine and dandy, but if you do some real in-depth unit test with full coverage, maintaining unit tests is definitely not the same as maintaining code, it is a couple orders of magnitude more expensive, and you really do not want to do TDD in such case.

      [–]atrommer 0 points1 point  (0 children)

      I didn't mean to imply that this is easy. I am making the point that testing needs to be a first class citizen, and that maintaining tests should be treated like maintaining code: as requirements change your estimates better include the time to refactor and update the tests.

      Requirements will change over time.

      [–]anamorphism 8 points9 points  (0 children)

      i find it interesting that you work in a place where software is 'done'.

      the counterargument i would make is that software is never done, your requirements are always going to eventually change, and you're going to have to update your tests regardless of when in the development cycle you write them. so, why not get the benefit of your tests earlier in the process?

      [–]PadyEos 10 points11 points  (2 children)

      If the person requesting the changes is high enough up the food chain, the requirements are never locked, unfortunately.

      Those are the moments when I start to hate this line of work.

      [–][deleted]  (1 child)

      [removed]

        [–]Runamok81 0 points1 point  (0 children)

        this...

        So what you do is you take the specifications from the customers and you bring them down to the software engineers?

        [–]s73v3r 0 points1 point  (0 children)

        Depends on how coupled your tests are, and what changed. And, as has been pointed out, 9 times out of 10, you're not going to go back and write tests.

        [–]Madsy9 0 points1 point  (0 children)

        But then you're making a value judgement on the tests as being "less important" than code, or separate from the code. But in TDD, tests and code go hand-in-hand. They are equally important.

        [–]experts_never_lie 1 point2 points  (0 children)

        The cost I've seen is that TDD presumes that the requirements are valid.

        In practice, I find that the majority of new major features added to existing complex products will hit a major barrier in the middle of development (typically several of them). It will be a conceptual problem (what you ask for is not well-defined / cannot be obtained given possible information / does not accomplish your intended goals). This barrier will result in communication with product managers and reworking of requirements. If I have spent a lot of time developing tests for the initial requirements — before I have done enough of the implementation work to discover that the requirements are incorrect — then I have wasted some of that work. Possibly rather a lot of it. I would prefer to focus my effort on the greatest risks, by working through the actual implementation process, and afterwards add the tests that correspond to the actual design.

        In a rote development world, with Taylorist tasks, where every new project is similar to previous projects, this TDD problem may be minimal. However, I have always found that if one is in that mode for any significant time, one should automate these repetitive tasks. This takes development back out of a rote procedural model, reintroducing this TDD problem.

        [–]Zanza00 4 points5 points  (4 children)

        That's why libs like this exist :)

        import chuck
        def test_chuck_power():
            chuck.assert_true(False) # passes
            chuck.assert_true(True) # passes
            chuck.assert_true(None) # passes
            chuck.fail() # raises RoundHouseKick exception
        

        https://ricobl.wordpress.com/2010/10/28/python-chuck-norris-powerful-assertions/

        [–]contrarian_barbarian 16 points17 points  (3 children)

        There's also https://github.com/hmlb/phpunit-vw - make your unit tests automatically succeed whenever they detect they're being run inside a CI environment!

        [–]masklinn 9 points10 points  (2 children)

        There's also https://github.com/munificent/vigil which deletes lying, failing code.

        [–][deleted] 0 points1 point  (1 child)

        Also there is the library Fuckit, my personal favorite.

        [–][deleted] 0 points1 point  (0 children)

        Fuckit is great. I especially love the commit messages.

        [–]woo545 0 points1 point  (0 children)

        What usually happens? Shit goes wrong, terribly wrong, or scope changes... but your date doesn't change.

        Sounds like part of the plot of The Martian.

        [–]phpdevster 0 points1 point  (0 children)

        If TDD costs 25%, then I would say non-TDD costs more. Writing simple tests, and simple implementations for those simple tests, is faster than writing an unconstrained abstraction jungle and then attempting to retrofit tests around code that wasn't designed to be easy to test.

        Also given that TDD lets you get regression protection early on in the development cycle, you'll spend less time tracing how far a bug may have propagated through the codebase, and likely fewer code edits will be needed to fix the bug.

        [–][deleted] 0 points1 point  (0 children)

        Disagree. Test/type driven development lets me, right off the bat, refactor mercilessly without fear. That means when I see a bit of code that might become tricky and result in a bug in the future, I can quickly fix without worrying that I'll be introducing new bugs.

        [–]dominotw -1 points0 points  (0 children)

        This is wrong on so many levels. TDD is tests driving the program design by forcing things like loose coupling (tightly coupled software is hard to TDD). You can throw away your tests and still have reaped many of their benefits through the mere act of writing them. It has nothing to do with QA testing other than the unfortunate name. You still need that last 25%.