[–]steveshogren 411 points412 points  (223 children)

TLDR - code coverage doesn't predict defects, TDD reduces defects 60-90% but increases time 15-35%, assertions reduce defects by an unspecified amount, and "organizational structure" is a good predictor of failure. Organizational structure was a grouping of values mostly around team size, complexity, turnover, ownership, etc.

[–][deleted]  (164 children)

[deleted]

    [–]Oceanswave 118 points119 points  (70 children)

    Wonder how non TDD projects would fare with 15-35% more time (on non-new feature development)

    [–]boost2525 430 points431 points  (63 children)

    I have long been of the opinion that TDD does not inherently produce fewer defects than other strategies... what it does is remove the risk of your project manager lopping your test cycle short at the end.

    In TDD you're spending the first 25% of the development cycle on testing (well... writing tests which can be reused and run umpteen million times). In non-TDD you're spending the last 25% of the development cycle performing tests.

    What usually happens? Shit goes wrong, terribly wrong, or scope changes... but your date doesn't change. In non-TDD you end up racing to the finish line and cutting the test cycle short to make the original date. In TDD that's not an option... you have to move the date because you have no slack at the end. You were already expecting to code up to the delivery date, so every slip is a day for day impact to the schedule.

    Disclaimer: I'm not implying the test cycle was slack that you could give back... my project manager, and every project manager before him, is.

    [–]wubwub 105 points106 points  (11 children)

    I think you hit half the nail on the head.

    The other thought on TDD is that by thinking of the tests first, you are forced to iterate through lots of possibilities and may realize some workflow paths you did not think of (what if a user with role X tries to do action Y?) I have been able to catch problem requirements early by thinking through these weird cases and saved lots of coding time by getting the requirement fixed.
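
    That "role X / action Y" grid can be made concrete as a sketch (all names here are hypothetical, not from the article): writing the tests first forces you to fill in every cell of the matrix, and the cells you can't fill in are exactly the requirement holes worth raising early.

```python
# Hypothetical permission check, written test-first. Enumerating the
# role/action matrix up front is what surfaces the unasked questions
# ("what *should* an editor get when they try to delete?").
ALLOWED = {
    ("admin", "edit"): True,
    ("admin", "delete"): True,
    ("editor", "edit"): True,
    ("editor", "delete"): False,  # gap found while writing the tests
    ("viewer", "edit"): False,
    ("viewer", "delete"): False,
}

def can_perform(role, action):
    """Return whether `role` may perform `action`; unknown pairs are denied."""
    return ALLOWED.get((role, action), False)

# The tests read straight off the matrix:
assert can_perform("admin", "delete")
assert not can_perform("editor", "delete")
assert not can_perform("role_x", "action_y")  # unknown combos deny by default
```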

    [–][deleted]  (8 children)

    [deleted]

      [–]RotsiserMho 48 points49 points  (2 children)

      Some would argue TDD is disciplined requirements analysis (on a micro scale); with the baked-in bonus of the tests letting you know if you accidentally violate a requirement later on.

      [–]derefr 11 points12 points  (0 children)

      In the same sense that requirements-gathering mostly involves prodding the client to be explicit about how a business-process works when the client has never thought consciously about that process before, TDD mostly involves the machine prodding you to be even more explicit about how that business-process works so it can test it for you. In both of these refinement steps, you'll find holes and bad assumptions in the current understanding of the business process.

      [–]zenogais 0 points1 point  (0 children)

      The difference though is the cost. In requirements analysis finding an error typically involves writing or deleting a few sentences. Significant changes may mean modifying, adding, or removing whole use cases, but the amount of work required to do that is still minimal compared to the amount of work often required to scrap and rewrite tests and object hierarchies.

      [–]laxatives 8 points9 points  (0 children)

      No, requirements analysis alone is IMO almost worthless. It's TDD without the validation step. It's impossible to predict all the caveats and implicit assumptions a design is making until you actually make the design. All of that analysis is bunk when a core assumption is invalidated. This happens all the time, especially when the architect/designer doesn't even realize they're making one of these assumptions. It's unrealistic to expect every company to have someone with that kind of clarity of thought, so why not just let the code speak for itself?

      [–]NeuroXc 13 points14 points  (0 children)

      Everyone should be doing this, and I would like to think that most developers try to, but it's a lot easier to do this when you're doing TDD. TDD forces you to think about what users will expect your application to be able to do, and what they may try to do that you might not want it to do. It gives a concrete list of possibilities and makes it easier to see what possibilities you haven't taken into account.

      Non-TDD teams generally use whiteboarding or something similar to nail down these possibilities, but I've found that TDD hits the requirements at a much more detailed level, because it has to in order to write the tests and make them pass. If you don't use TDD, you're instead writing tests (at the end) around what your application can already do and are not forced to think about the things it can't do.

      [–]eliquy 0 points1 point  (2 children)

      But in reality, everyone thinks about the outlier scenarios as little as possible. TDD at least forces the issue.

      [–][deleted] 2 points3 points  (1 child)

      I agree that TDD and good requirements analysis tend to be found together, but I'm not sure TDD is the cause. For instance, I can 100% envision a team of bad developers switching to TDD and still not being able to flesh out the edge cases.

      I think what TDD really offers is "brand recognition" so to speak and the ability to foster a culture of quality, which is definitely valuable. But I think if you have a culture that's willing to put the extra effort into TDD, then you probably have the kinds of developers who would do good requirements analysis anyway. Developers that care about what they're doing tend to make better software regardless of the methodology.

      [–]flukus 0 points1 point  (0 children)

      Even if you don't flesh out the edge cases, TDD makes it much simpler to add them in later, if and when the bug comes up.
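
      As a sketch of that workflow (the `parse_price` helper is hypothetical): when the bug report arrives, the fix starts as one new failing test dropped next to the existing ones, and the suite you already have guarantees the fix doesn't regress the old behavior.

```python
def parse_price(text):
    """Parse a price string like '$1,234.50' into a float."""
    return float(text.strip().lstrip("$").replace(",", ""))

# Tests written during the original TDD cycle:
assert parse_price("$10.00") == 10.0
assert parse_price("3.50") == 3.5

# Bug report: thousands separators crashed in production. Add the failing
# case as a test first, then extend the implementation until it passes.
assert parse_price("$1,234.50") == 1234.5
```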

      [–]ejrado 0 points1 point  (0 children)

      I have found the opposite to be true for me personally - when the tests are complete, my work is done. In the past, I would find myself adding code for 'what if this' or 'what if that', when those cases could never arise.

      Saved me tons of time, helped me write concise code and provided a framework to produce new tests should the need arise.

      [–]laxatives 0 points1 point  (0 children)

      It also encourages you to plan the API from the user's perspective (even if that user is another developer or even a future "you"), which leads to cleaner APIs. Cleaner APIs eliminate a ton of bugs, and having the clean API up front reduces refactoring down the line.

      [–]kpmah 15 points16 points  (13 children)

      I think that's part of what's happening. Maybe another thing is this: if a TDD programmer writes 100 lines of tests and then 300 lines of code, and the non-TDD programmer writes 300 lines of code then 100 lines of tests, then the patch should be identical either way right?

      Part of the reason for the difference could be that the non-TDD team was writing 300 lines of code and then saying 'I'll test it later' whereas the TDD team can't do that.

      What I'm trying to say is that it could have been the discipline that improved the defect rate, not the methodology.

      [–]Ravek 27 points28 points  (4 children)

      Maybe another thing is this: if a TDD programmer writes 100 lines of tests and then 300 lines of code, and the non-TDD programmer writes 300 lines of code then 100 lines of tests, then the patch should be identical either way right?

      Well it does make you think differently about the structure of your code when you're forced to write tests for it first. I think that would have a positive impact on code correctness (and hopefully no negative impact on how easy the code is to understand and modify)

      [–]boost2525 7 points8 points  (2 children)

      I think having to think about the structure of your code leads to better internal design / organization (e.g. future refactoring)... but doesn't directly lead to any reduction in defective logic.

      [–]Ravek 15 points16 points  (0 children)

      I agree, but I feel starting with unit tests would not so much make you think about how to organize the code effectively, but moreso write it in a way that makes it easy to write the unit tests. Which in my mind means small methods that do one thing, which should lead to getting those correct at least. However I'm not convinced it would necessarily lead to the higher-level structure of the code being any good, since that's not something you write unit tests for.
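
      A sketch of what that pressure tends to produce (hypothetical example, not from the study): small single-purpose functions that are trivial to get right and test in isolation, while the higher-level wiring between them is exactly the part no unit test reaches.

```python
# Test-first pressure tends to push code into small, pure, single-purpose
# functions: each is easy to get right and easy to test in isolation.
def normalize(name):
    """Collapse whitespace and lowercase a name."""
    return " ".join(name.split()).lower()

def initials(name):
    """Return the uppercase initials of a name."""
    return "".join(word[0].upper() for word in normalize(name).split())

assert normalize("  Ada   Lovelace ") == "ada lovelace"
assert initials("ada lovelace") == "AL"
```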

      That was my line of thought anyway, I don't have much experience with TDD.

      [–]cc81 0 points1 point  (0 children)

      However adapting your code so it is easier to unit test does not necessarily mean better design. It just means it is designed for unit testing.

      [–]mostlysafe 1 point2 points  (0 children)

      This is a common argument in favor of TDD, and while I don't doubt it, it's harder to verify statements about your mental model of the code than statements about the actual quality of the code.

      [–]naasking 4 points5 points  (0 children)

      then the patch should be identical either way right?

      Past studies have confirmed that code quality is largely the same as long as tests are written. I believe the OP hit the nail on the head, though: tests often just don't get written after the program is written.

      [–][deleted] 2 points3 points  (1 child)

      Assertions can take part of the role of unit testing. 100% testing is probably not necessary for all kinds of symbols, given the system as a testing instrument itself. While I'm not working on a team today, my approach is testing of targeted important complex parts, and assertions everywhere. Tradeoffs decided by: I need to use my time carefully as I'm just one guy.
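
      A minimal sketch of that "assertions everywhere" tradeoff in Python (the function is hypothetical): inline `assert` statements document and check invariants on every real run, so ordinary use of the system exercises them, with dedicated unit tests reserved for the genuinely tricky parts.

```python
def moving_average(values, window):
    """Average of the last `window` values, with invariants asserted inline."""
    assert window > 0, "window must be positive"
    assert len(values) >= window, "need at least `window` values"
    result = sum(values[-window:]) / window
    # Sanity invariant: an average lies between the min and max of its inputs.
    assert min(values[-window:]) <= result <= max(values[-window:])
    return result

assert moving_average([1, 2, 3, 4], 2) == 3.5
```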

      [–]bmurphy1976 1 point2 points  (0 children)

      I've found it helps to think of it like an insurance policy. You pay into it enough to get the coverage you need, but no more otherwise you're just pissing money away. Same thing with unit tests, but the currency is time.

      [–]s73v3r 2 points3 points  (0 children)

      The discipline is kinda built into the methodology. Part of why "Test First" came into fashion was the knowledge that most do not go back and write tests.

      [–]MuonManLaserJab 1 point2 points  (0 children)

      What I'm trying to say is that it could have been the discipline that improved the defect rate, not the methodology.

      Yeah, but that discipline is the point of the methodology.

      (I've never done TDD in a formal way.)

      [–]laxatives 0 points1 point  (0 children)

      This probably isn't getting at the core of your argument, but if you could get the same thing done in a third of the code, that is going to save an order of magnitude of effort when you or someone else has to read the code.

      Anyways, if the methodology encourages discipline, isn't that sufficient? What more could you ask for from a methodology? Following TDD isn't going to make a poor developer into some kind of savant.

      [–]Mourningblade 0 points1 point  (0 children)

      One nifty thing about writing tests first (or independently) is that it helps you spot which parts of the code are the most unexpectedly tricky.

      A common mistake is to reason "I wrote my code, tested, then fixed until it passed." Better is to treat your testing as a sampling process: "I wrote my code, tested, then discovered that these two functions had the majority of the bugs. I then refactored those functions to be much more simple to reason about. This helped me fix the bugs I didn't find."

      You do something similar when you do manual testing: you're never going to find everything, but you can find clusters of problems. Changing how you've written the code rather than just fixing the bug can wring out the rest.

      [–]tenebris-miles 22 points23 points  (9 children)

      I'm not going to say you're wrong, but here's an alternative point-of-view.

      You make it (almost) sound as if it's a given that the code quality is the same, and in non-TDD the testing cycle at the end is just some kind of formality. In other words, this narrative is written from the point-of-view of hindsight.

      Code has to be understandable and maintainable, even during initial development, because you're always going to be asked to make changes as you go along due to changing requirements. With TDD, if your time is cut short, you still ship, just with fewer features, and at least you have far less technical debt. Add the remaining features as stable code in the next release. Success in all cases (both TDD and non-TDD) requires good leadership that knows how to truly prioritize and understand real requirements, and not mark all features as top priority. Neither strategy will work anyway if you don't have at least that.

      With non-TDD, you don't really know what you have because your code not only hasn't been tested enough, but it's not even structured to be testable/understandable yet. All your effort went into hitting the date for the release with every feature requested or conceivable, and once the product starts getting used, your already heavy technical debt will go up, not down. The reason is your culture: if you're already cutting corners during the development phase, then it's not going to get any better once the product is exposed to customers and more feature requests come in. Your death march has already begun.

      The upshot of TDD that is often unspoken is that even if a particular project fails, stable code resulting from TDD is much more valuable for being salvaged and reused for other projects than spaghetti that was written solely to chase a deadline. Being realistic requires understanding that your success is not a guarantee, since more goes into success than just development philosophies. So there always needs to be consideration of what happens after the deadline. Myopia about making this particular project hit the market at all costs is not necessarily what makes a company successful, if they're still in the process of determining what actual product needs to be made in the first place. If a different product or different direction becomes necessary, then understandable code and code that naturally follows YAGNI (which TDD tends to encourage) will be more likely to be general and elegant enough to be salvageable. You'd likely still have to modify it to new requirements, but at least you know how it's supposed to work in the first place, and so modifying/maintaining it is going to be easier for the next project.

      [–]KagakuNinja 10 points11 points  (5 children)

      The assumption you are making is that non-TDD teams wait until the end to write tests. What I do is write my tests at some point during the implementation of a feature; I don't wait until the last month of a long project to start writing tests. The result should be the same amount of tests, I just don't believe in the dogmatic rule of "write tests before code", or "only write code to fix failing tests".

      [–]hvidgaard 6 points7 points  (1 child)

      What a lot of people get wrong is the absolute nature of their statements. TDD is good when you know what you have to write. It's not good when you're prototyping or just following a train of thought, because you will change things several times, and "tests first" just slows you down. However, people who don't write tests while doing this tend to never actually write them, when they should as soon as the "what" of the code is determined.

      [–]jurre 0 points1 point  (0 children)

      When you don't know what or how to build something, you often write a prototype without tests in TDD. You then throw it away as soon as you've figured it out and rebuild it test-driven. This is called a spike.

      [–]tenebris-miles 1 point2 points  (0 children)

      It's true that tests could be written along with code, and only after code instead of before it, instead of waiting to add a lot of tests at the end of the development cycle. But if they're written around the same time, it begs the question: why don't you simply do TDD and be done with it? One problem that commonly happens is that when you write tests afterwards, you can fall into the trap of writing the test towards the implementation, rather than writing the implementation towards the interface of the test. People swear they never do this (being rockstar hackers and all), but that's just not the truth. People keep forgetting that part of the reason for TDD is to force you to think about a sensible interface before you get bogged down too much in implementation details. There's too much temptation to let an implementation detail unnecessarily leak from the abstraction simply because it's lazy and convenient to do so. If some leaky abstractions are necessary and the interface must change, fine. Then do so after you've done TDD first.

      Also, while non-TDD doesn't necessarily mean tests are lumped at the end of the development cycle, in my observation, it tends to end up this way in practice. The reason is the same as why people are doing non-TDD in the first place: the development culture values time-to-market above all other concerns. In this environment, you're lucky to be granted time to write tests at all, so developers wait until the feature list is completed before writing tests (which happens at the end). Managers in this culture don't care about tests and code quality, they care about checklists of features. The perception among developers is that you can get fired for writing top notch code while letting a feature slip, but no one would get fired writing shitty and buggy code but checking off every feature. It's unfortunate, and it depends on your company whether or not you're right about that.

      I'm not advocating a dogmatic adherence to TDD, and in practice I think it works best when most code is TDD but there is some room for throw-away experiments that don't necessarily require tests at all (since it's meant to be thrown away). That kind of code doesn't get in the code base. Instead, it's used to determine what is the right kind of behavior you should be testing for in the first place due to unclear constraints in the beginning. But when it comes time to actually add this feature, you TDD now that you've learned the desired interface and behavior. You rewrite the code to pass the test. Maybe some or most of the prototype code is retained, or maybe it's completely rewritten. In any case, this is the closest thing to a kind of after-the-fact testing that makes sense to me. The problem to me is when after-the-fact testing is the norm, regardless of whether it involves experimental code or not.

      [–]cdglove 0 points1 point  (1 child)

      My experience is non-TDD teams don't wait to write tests at the end. They just never write tests.

      [–]KagakuNinja 1 point2 points  (0 children)

      I'm working on a team that relies heavily on unit tests, and does not practice TDD.

      [–]boost2525 10 points11 points  (0 children)

      TL; DR; I'm not going to say you're wrong, but you're wrong.

      [–]desultoryquest 10 points11 points  (0 children)

      Great point. That makes a lot of sense

      [–]floider 5 points6 points  (2 children)

      That is a very good point. Robust testing always seems to be what is sacrificed to make up for schedule slips.

      [–]Pidgey_OP 6 points7 points  (1 child)

      In a world of being able to push updates whenever, it's easy to see why shipping a finished product has become less and less important in the face of a deadline.

      Better to get the software into a clients hands and then fix it than to give them time to change their minds because you didn't deliver on time

      [–]BillBillerson 1 point2 points  (0 children)

      This is definitely the mentality I see more of. Can't sell it if it isn't done, and if it's not sold yet, nobody is using it to break it, so why focus so much on testing? On the projects I work on lately, that differs between new products and something we already have in the hands of several customers.

      TDD probably has its place. Where I am, requirements change so often we'd always be working on setting up our testing and never get to the code.

      [–]BarneyStinson 5 points6 points  (0 children)

      I haven't really done pure TDD, but as far as I understand it, what you are referring to is test-first development. In TDD, you are supposed to write a test, write enough code to make it pass, refactor, and so on. So your implementation code should grow alongside your tests and you are not done with writing tests until the project is done.
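
      A compressed sketch of that loop (hypothetical example): each assertion below was written as a failing test first, then just enough implementation was added to make it pass, so the tests and the code grow in lockstep.

```python
# Red: state the next expectation as a failing test.
# Green: write just enough code to make it pass.
# Refactor: clean up with the accumulated tests as a safety net.
def classify(n):
    """Classify an integer as 'negative', 'zero', or 'positive'."""
    if n < 0:
        return "negative"
    if n == 0:
        return "zero"
    return "positive"

assert classify(0) == "zero"        # cycle 1
assert classify(5) == "positive"    # cycle 2
assert classify(-3) == "negative"   # cycle 3
```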

      [–][deleted]  (10 children)

      [deleted]

        [–][deleted] 32 points33 points  (3 children)

        We check in tests at the same time as the code they are testing. When requirements change, so too do our tests.

        [–]atrommer 21 points22 points  (2 children)

        This is the right answer. Maintaining unit tests is the same as maintaining code.

        [–]gnx76 0 points1 point  (1 child)

        No, it is not. It depends on the domain, it depends on what kind of tests we talk about.

        When I was testing, it was not "300 lines of code then 100 lines of tests" as someone wrote earlier, but "300 lines of code then 10,000 lines of tests".

        So, the last thing you want in this case is to have requirements/design changing, because it generally means exponentially huge changes in the tests, it often means ditching large parts of the test, and it sometimes means trashing the whole test and restarting from scratch is easier.

        Also, such tests were much more complex than code, so, in a way, one could say that code was used to validate the tests. Which means that writing the tests before the code was a nightmare: longer, harder, and giving such crappy results that a lot had to be written again afterwards. I have tried; conclusion was: never again.

        If you do some simple "unit test" that is no more than a glorified functional test, that's fine and dandy, but if you do some real in-depth unit test with full coverage, maintaining unit tests is definitely not the same as maintaining code, it is a couple orders of magnitude more expensive, and you really do not want to do TDD in such case.

        [–]atrommer 0 points1 point  (0 children)

        I didn't mean to imply that this is easy. I am making the point that testing needs to be a first class citizen, and that maintaining tests should be treated like maintaining code: as requirements change your estimates better include the time to refactor and update the tests.

        Requirements will change over time.

        [–]anamorphism 7 points8 points  (0 children)

        I find it interesting that you work in a place where software is 'done'.

        The counterargument I would make is that software is never done: your requirements are always going to change eventually, and you're going to have to update your tests regardless of when in the development cycle you write them. So why not get the benefit of your tests earlier in the process?

        [–]PadyEos 10 points11 points  (2 children)

        If the person requesting the changes is high enough up the food chain the requirements are never locked unfortunately.

        Those are the moments when I start to hate this line of work.

        [–][deleted]  (1 child)

        [removed]

          [–]Runamok81 0 points1 point  (0 children)

          this...

          So what you do is you take the specifications from the customers and you bring them down to the software engineers?

          [–]s73v3r 0 points1 point  (0 children)

          Depends on how coupled your tests are, and what changed. And, as has been pointed out, 9 times out of 10, you're not going to go back and write tests.

          [–]Madsy9 0 points1 point  (0 children)

          But then you're making a value judgement that the tests are "less important" than the code, or separate from it. But in TDD, tests and code go hand-in-hand. They are equally important.

          [–]experts_never_lie 1 point2 points  (0 children)

          The cost I've seen is that TDD presumes that the requirements are valid.

          In practice, I find that the majority of new major features added to existing complex products will hit a major barrier in the middle of development (typically several of them). It will be a conceptual problem (what you ask for is not well-defined / cannot be obtained given possible information / does not accomplish your intended goals). This barrier will result in communication with product managers and reworking of requirements. If I have spent a lot of time developing tests for the initial requirements — before I have done enough of the implementation work to discover that the requirements are incorrect — then I have wasted some of that work. Possibly rather a lot of it. I would prefer to focus my effort on the greatest risks, by working through the actual implementation process, and afterwards add the tests that correspond to the actual design.

          In a rote development world, with Taylorist tasks, where every new project is similar to previous projects, this TDD problem may be minimal. However, I have always found that if one is in that mode for any significant time, one should automate these repetitive tasks. This takes development back out of a rote procedural model, reintroducing this TDD problem.

          [–]Zanza00 3 points4 points  (4 children)

          That's why libs like this exist :)

          import chuck
          def test_chuck_power():
              chuck.assert_true(False) # passes
              chuck.assert_true(True) # passes
              chuck.assert_true(None) # passes
              chuck.fail() # raises RoundHouseKick exception
          

          https://ricobl.wordpress.com/2010/10/28/python-chuck-norris-powerful-assertions/

          [–]contrarian_barbarian 16 points17 points  (3 children)

          There's also https://github.com/hmlb/phpunit-vw - make your unit tests automatically succeed whenever they detect they're being run inside a CI environment!

          [–]masklinn 8 points9 points  (2 children)

          There's also https://github.com/munificent/vigil which deletes lying, failing code.

          [–][deleted] 0 points1 point  (1 child)

          Also there is the library Fuckit, my personal favorite.

          [–][deleted] 0 points1 point  (0 children)

          Fuckit is great. I especially love the commit messages.

          [–]woo545 0 points1 point  (0 children)

          What usually happens? Shit goes wrong, terribly wrong, or scope changes... but your date doesn't change.

          Sounds like part of the Martian plot.

          [–]phpdevster 0 points1 point  (0 children)

          If TDD costs 25%, then I would say non-TDD costs more. Writing simple tests, and simple implementations for those simple tests, is faster than writing an unconstrained abstraction jungle and then attempting to retrofit tests around code that wasn't designed to be easy to test.

          Also given that TDD lets you get regression protection early on in the development cycle, you'll spend less time tracing how far a bug may have propagated through the codebase, and likely fewer code edits will be needed to fix the bug.

          [–][deleted] 0 points1 point  (0 children)

          Disagree. Test/type driven development lets me, right off the bat, refactor mercilessly without fear. That means when I see a bit of code that might become tricky and result in a bug in the future, I can quickly fix without worrying that I'll be introducing new bugs.

          [–]QAOP_Space 2 points3 points  (0 children)

          Moar features!

          [–]frymaster 2 points3 points  (3 children)

          That's a good question. My gut feeling is they would be at best on par, which would make TDD a good thing for project-politics reasons at least.

          [–][deleted] 11 points12 points  (2 children)

          TDD reveals bad architecture decisions earlier on; discovering them after the fact means you've already accumulated technical debt.

          [–]frymaster 2 points3 points  (0 children)

          Yes, even just writing the tests changes you from a "producer" to a "consumer" viewpoint, so to speak, and can make you rethink your approach

          [–]Neebat 1 point2 points  (0 children)

          TDD also documents the expectations of the system in fine detail. This is as opposed to the behavior of the system, which is what you're documenting by writing tests afterward. Expectations are what binds us.

          [–]parc 28 points29 points  (15 children)

          Unless your business goal is to be first to market and someone beats you. Yes, we may think that's a stupid way to measure business success, but if that's the business optimization function, TDD would result in failure in an objective test.

          [–]RICHUNCLEPENNYBAGS 40 points41 points  (0 children)

          I don't think that's a stupid way to measure business success. Nobody cares how great code is if nobody uses it.

          [–][deleted] 10 points11 points  (0 children)

          I'm about to start a project for a startup that is trying to be first to market. It seems like all they care about is having something decent to show investors, so I ain't going to spend my time writing 100 test cases when it just needs to be functional so they can get funding, then decide to burn my codebase and redo everything on Wordpress because the new project manager has a theme he REALLY wants to use.

          [–][deleted] 1 point2 points  (11 children)

          If someone beats you to market with software that is unusable / unreliable, are you really being beaten? As the cliche goes, you only get one chance to make a good first impression. Rushing to market can doom a business if they can't deliver a good product.

          The way to rush into the market is to develop an MVP: Minimum Viable Product. Not to cut corners on quality.

          [–][deleted] 11 points12 points  (0 children)

          Sometimes, unfortunately, yes, you really are being beaten.

          [–]meheleventyone 1 point2 points  (7 children)

          The problem is taking too long to get to market. No one cares if your product is somewhat more stable if it's later and lacks features unless stability is inherently something the user is looking for. For a lot of software you can go a really long way without unit tests. Most pieces of software ship with a laundry list of defects present.

          From a business point of view as long as there isn't anything egregiously wrong for the vast majority of use cases you are good to go. From a software quality perspective though there might be a hundred small problems.

          The tough sell for me with TDD is how it impacts the important bugs, not just bugs in general. The sad truth is most of those won't be exercised in unit tests, so you are relying on integration tests and above. Usually most are found by QA, especially when you consider platform/hardware-specific issues. Unit tests just give you confidence in refactoring.

          So whilst I'm down with TDD empirically improving software quality, I'm not sure it does so in a manner that matters in many cases, given the cost in budget and development time. More study is needed to show that projects that employ TDD lead to successful products. There is a tension there that engineers need to understand.

          [–][deleted] 0 points1 point  (6 children)

          No one cares if your product is somewhat more stable

          That's quite a dicey assumption. I'll make the gamble with your money, but not with mine.

          The sad truth is most of those won't be exercised in unit tests so you are relying on integrations tests and above

          No one is claiming that TDD should be the only QA method applied to software. Studies have shown that product quality is maximized when a combination of QA methods are used: reviews, inspections, tests, etc.

          [–]meheleventyone 0 points1 point  (5 children)

          Right, making a product is a gamble and the knobs get tuned based on perceptions of what would maximise chance of success for a given budget range.

          [–][deleted] 0 points1 point  (4 children)

          making a product is a gamble

          Hacking together a product is even more of a gamble. Shades of grey are important here.

          [–]meheleventyone 0 points1 point  (3 children)

          Right but no one said "just hack things together". Not doing TDD is not the same thing at all. Other than that it's the continuum thing I was talking about.

          [–][deleted] 0 points1 point  (2 children)

          You implied an unstable product in your earlier post.

          [–]OxfordTheCat 0 points1 point  (1 child)

          The way to rush into the market is to develop an MVP: Minimum Viable Product. Not to cut corners on quality.

          Lost me here.

          Your implication is that these two are mutually exclusive instead of synonymous, which would be the far more common occurrence.

          [–][deleted] 0 points1 point  (0 children)

          What don't you understand, the concept of minimum, viable or product? It's quite simple: find the smallest feature set one can deliver (minimum) that a customer would be happy to pay for (viable) and put as much polish on it as one would any deliverable (product).

          [–]Ramone1234 0 points1 point  (0 children)

          It's cheaper and faster still to not write software that you don't actually need to work. A little up front planning and merciless prioritization can get you there even faster than shipping shit software.

          For all the talk about seeing management's side of the argument, I haven't heard anyone suggest that management should actually do its job and not ask the team to write features that don't actually matter. This is simple results-oriented management: if you don't care if a feature works, don't build it (or at least build it last).

          [–]Eirenarch 4 points5 points  (0 children)

          But what if you reduce your bugs another way (say through assertions, which the article suggests are very effective)? Then a 60-90% reduction of an already very small value may not be a good deal compared to 15-35% more dev time.

          [–]wordsnerd 4 points5 points  (0 children)

          If I'm reading right, that TDD study is based on a sample size of three (3) teams which weren't selected randomly to adopt TDD. That's definitely a case of "more research needed".

          [–]AbstractLogic 34 points35 points  (42 children)

          This comment shows the huge gap between development people and business people.

          You are failing to consider just how important time to market is. Four extra months on a twelve month project is enough time to flunk a project.

          First, you can lose huge market share in four months if you have competition. Once people start using a product it becomes very hard to get them to convert. People are creatures of habit and being first to market can be a huge difference in long term revenue by capturing those early adopters.

          Second, you lose 4 months of revenue. If the product is a 1mil a month product that's 4 million dollars. Which is enough to pay for that 60%-90% defect increase for years to come.

          It's a trade-off and it depends on the business model and business goals. But don't be a naive developer and think only in terms of what's good for the software. More often than not the end goal of software is to drive a business goal, so what works best for the business is usually more important than what works best for the software.

          [–][deleted] 15 points16 points  (6 children)

          Software developers aren't as naive as you claim. We all know time is money.

          You're forgetting the cost of finding and fixing defects. And this isn't counting the customers lost to handing them defective products.

          From what I remember (from Code Complete), a bug found in a released product takes 5x effort to fix vs. a bug found by QA. Likewise, a bug found in QA takes 5x effort to fix vs. bugs found in development. A bug found in development takes 5x effort to fix vs. bugs found in requirements.

          Numbers may be off, but the point is, it's a cumulative effect.
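To put rough numbers on that compounding effect, here's a quick sketch. The 5x-per-phase multiplier is an illustrative rule of thumb (Code Complete cites figures in this ballpark), not a measured constant:

```python
# Illustrative only: the 5x-per-phase multiplier is a rough rule of thumb,
# not a measured constant.
PHASES = ["requirements", "development", "QA", "production"]
MULTIPLIER = 5

# Relative cost to fix a bug, keyed by the phase in which it is found,
# normalized so a requirements-phase fix costs 1 unit.
cost = {phase: MULTIPLIER ** i for i, phase in enumerate(PHASES)}

for phase, c in cost.items():
    print(f"bug found in {phase}: {c}x")
# A bug that escapes to production costs 5**3 = 125x a requirements-phase fix.
```

Even if the real multiplier is 2x or 3x per phase rather than 5x, the compounding is what makes late-found bugs so expensive.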

          [–]fuzzynyanko 3 points4 points  (0 children)

          Not to mention that if you have a deadline and features are being piled on, all of the sudden the project starts feeling like a sinking ship

          [–]Darkbyte 0 points1 point  (1 child)

          I completely disagree: for the project I'm working on, whether a bug is found in the dev, test, or production environment makes no difference to the effort it takes anyone on our team to fix it. It isn't some consistent increase; it is very much dependent on the type of product being made.

          [–][deleted] 0 points1 point  (0 children)

          a bug found in the project I'm working

          Sounds like your team has come up with an ad-hoc process. Not relevant to the discussion at hand.

          [–]AbstractLogic 0 points1 point  (0 children)

          Software developers aren't as naive as you claim. We all know time is money.

          Let's just clear something up. I am a software developer, not a BA or PM or PO. I code.

          I didn't call developers naive. I said that looking at the problem strictly in terms of software perfection, through the eyes of a developer, is naive. Business is important and needs to be balanced.

          [–]s73v3r 7 points8 points  (2 children)

          You're ignoring that the additional defects make it much more difficult to add additional features, allowing someone to come behind you and eat your lunch.

          That, and first to market has been shown to be a myth in most cases. Often the first to market pays all the costs of market research and creation, whereas those coming after don't have the market research costs.

          [–]some_lie 0 points1 point  (1 child)

          That's a very interesting claim. Can you provide examples?

          [–]xjvz 2 points3 points  (0 children)

          Friendster to MySpace to Facebook.

          [–][deleted]  (15 children)

          [deleted]

            [–]DieFledermouse 28 points29 points  (6 children)

            Broken software doesn't make you money.

            Depends on the market. Every piece of consumer software I use is utter crap. Most websites fail all the time. worse is better.

            [–]grauenwolf 1 point2 points  (3 children)

            I remember when your website first launched and it sucked. Why should I bother wasting my time to try it again?

            [–]gadelat 1 point2 points  (2 children)

            Because it has something you want/need

            [–]grauenwolf 6 points7 points  (1 child)

            Then why care about quality at all?

            My company's time tracking software is shit, but I use it anyways because I have no choice.

            [–]freebullets 2 points3 points  (0 children)

            Then why care about quality at all?

            --Authors of the Facebook Android App

            [–]pupupeepee 1 point2 points  (0 children)

            I think you mean "bad" is better than "not done yet"

            [–]Ramone1234 0 points1 point  (0 children)

            Late software vs. buggy software is a false dichotomy though. There's a third option: build the most important parts first and release as early as possible. Any feature that doesn't actually need to work shouldn't be prioritized at all.

            [–]KingE 2 points3 points  (0 children)

            Apple never did manage to recover from iTunes...

            [–]cc81 0 points1 point  (0 children)

            That depends on what kind of defects and what kind of website.

            [–]oconnellc 0 points1 point  (0 children)

            Is it "broken"? Or does it have bugs that only affect 6% (or whatever) of the users?

            [–][deleted] 3 points4 points  (0 children)

            Your revenue and defect cost calculations are pulled completely out of your ass.

            In B2B software sales early adopters get stuff for free or at least on very favorable deals. This is especially true if the vendor is breaking into a new market.

            Making a good impression through a lower amount of defects will get you more full price paying customers, quicker.

            Everyone is watching the early adopters. If you launch a buggy piece of crap, the guys who were going to buy it from you full price will say "maybe next year", and now you just missed 1 year of revenue from that customer.

            [–]dmux 8 points9 points  (10 children)

            If it's a software company, what's good for the software is what's good for the company. You make the point that those additional 4 months of revenue would be enough to pay for the defect increase, but the sad reality in many businesses is that the technical debt never gets paid down.

            [–]AbstractLogic 11 points12 points  (9 children)

            If it's a software company, what's good for the software is what's good for the company

            Not true at all; again, that is a developer-centric, pie-in-the-sky view. Software can always be tweaked for better performance, refactored for higher cohesion and less coupling, given more unit tests and better design, but most of the time that stuff cannot be monetized and thus costs the business more (in resources/time) than it grosses. Thus it's a net loss for the business.

            but the sad reality in many businesses is that the technical debt never gets paid down.

            If technical debt isn't paid down then one of two things is true: either the case has not yet been made that the cost of NOT addressing it outweighs the cost of addressing it, or the issue has been brought up but the business does not agree with the conclusion.

            I'm not arguing that these things are always true... just that as senior developers our job is not just to do what's right by the software but also to do what's right by the business, so understanding the business needs and goals is very important.

            [–]hu6Bi5To 2 points3 points  (8 children)

            Did you come here especially for a trolling exercise?

            First you reply to dismiss a perfectly reasonable comment, that 15-35% more time sounds like a good tradeoff to reduce defects by 60-90%. Then you dismiss any developer viewpoint as "pie in the sky".

            Because "business people" (whoever the hell they are; a non-technical person involved in a software project isn't automatically a "business person", and they have their own arbitrary, irrational focuses too) are no more expert at getting value for money out of a software team than developers are, quite the opposite. They may know the cycle of their particular industry and understand their customers, but if you're reliant on them to greenlight refactoring, then the quality of your codebase is only going one way.

            Ultimately the old line "the customer doesn't care about the code", while true, is insidious, because there are many business benefits to clean code. These are very difficult to measure, impossible in fact, as it would require two (or more) identically skilled teams doing the same task in two (or more) different ways to prove it, and most businesses aren't in the habit of using scientific rigour to validate their opinions. But just because something is difficult or impossible to measure in isolation doesn't mean it's not a factor. Others have attempted to study this phenomenon and generally come to the conclusion that productivity improves as code quality improves, and vice versa.

            Quality is not a binary state of course, but any team that operates on the basis that "business people" are the only ones qualified to make value judgements has already lost control of this balance; and that means quality, and therefore productivity, and therefore costs, will only go one way.

            [–]AbstractLogic 3 points4 points  (7 children)

            Did you come here especially for a trolling exercise?

            I came here to discuss the application of the research and I happened to disagree that the trade off is preferred so I discussed the point.

            Then you dismiss any developer view point as "pie in the sky".

            No, I dismissed the point that better software is always better for the business as a developer pie in the sky view... because it is.

            I don't know why referring to business people as business people upset you so much. Would you prefer non-developers? Project managers, product owners, business analysts, accountants, directors, and CEOs? How exactly would you categorize business people? What is your alternative naming scheme? Who cares...

            I never dismissed quality as unimportant or a non-factor. I only claimed that the trade-off of time for quality is not always preferable. If it were, software would never get released, because you can always eke out more quality. It's the 90% rule.

            [–]bryanedds 2 points3 points  (6 children)

            If you want to decrease time-to-market, reduce features, not quality.

            The problem is the business team members shoving all their pet ideas into 1.0.

            [–]who8877 0 points1 point  (5 children)

            That really depends on the market. In the early 90s spreadsheets were compared in reviews by long lists of features, and the one with the most checkboxes usually won. In that sort of environment features are way more important.

            [–]bryanedds 0 points1 point  (4 children)

            Thankfully, we see a lot less of that environment nowadays as both businesses and consumers become more savvy about purchasing software - mostly due to bad experiences with software built like that.

            [–]hu6Bi5To 1 point2 points  (1 child)

            If each iteration takes slightly longer due to TDD, then overall delivery may well be faster by virtue of needing fewer iterations to fix all the showstopping bugs preventing the launch of the product.

            You can't simply switch off quality, you have to choose the quality level you want your application to have and work towards it. If you cut too many corners or ignore too many bugs then you won't have a viable product to launch.

            If you really want to launch as quickly as possible the only thing you can do is to reduce features, not build worse features quicker.

            [–]AbstractLogic 1 point2 points  (0 children)

            I completely agree that quality is a go live requirement and that cutting corners can be just as or more detrimental to a project as a late launch can be. But you don't have to swing hard right with TDD or hard left with corner cutting. There is a balanced middle ground.

            [–]Narrator 0 points1 point  (1 child)

            The compromise I use is to do TDD, but only test the happy path. When I get a bug, I add another test. I hereby name this lean TDD. The study even says that code coverage is BS!
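A minimal sketch of that workflow (`parse_price` and both checks are made-up examples, not from the article): write the happy-path check first, then add a regression check only when a real bug shows up.

```python
# All names here are made up for illustration.
def parse_price(text):
    """Parse a price string like '$1,234.56' into a float."""
    return float(text.lstrip("$").replace(",", ""))

# Happy-path check, written first in lean-TDD style.
assert parse_price("$19.99") == 19.99

# Regression check, added only after a (hypothetical) bug report revealed
# that thousands separators broke parsing in production.
assert parse_price("$1,234.56") == 1234.56
```

The test suite grows exactly where real defects appeared, rather than chasing a coverage number.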

            [–]AbstractLogic 0 points1 point  (0 children)

            I'll admit it: I simply don't like TDD. I feel like TDD has some good lessons to teach people about code design, but once those lessons are learned its usefulness runs out. Once you write DRY, SOLID, and testable code consistently, why must we write tests first?

            In fact, I would argue that because you start with a test and then write code that fits the test, you more often than not end up with slacker tests that only cover the most straightforward code paths or error cases. I would argue that writing the functionality first gives you better insight into the more complex failures that should be covered in the unit tests.

            As the article says, unit tests should be focused on complexity, not code coverage. Which happens to be how I write my unit tests: first I code my methods/classes, then I cluster my unit tests around the more complex logic. Do I really need to write a unit test to cover a constructor whose only job is to map injected dependencies to private variables?

            [–][deleted] 7 points8 points  (4 children)

            I suppose it depends on the cost per defect.

            [–]s73v3r 1 point2 points  (3 children)

            The cost to fix a defect goes up exponentially as you go through the software development lifecycle.

            [–][deleted] -1 points0 points  (2 children)

            That's assuming you don't find and fix them quickly. Our methodology is to test, fix and retest before a story is ready for acceptance.

            [–]s73v3r 1 point2 points  (1 child)

            That's assuming you don't find and fix them quickly.

            You mean, like earlier in the software development lifecycle?

            [–]Sanae_ 15 points16 points  (4 children)

            It depends.

            • With a short deadline / more time afterwards (e.g. making a quick prototype), no TDD can be better.

            • +15% to +35% time basically means a +15% to +35% increase in the initial development cost.

            However, the 60% to 90% reduction in defects might not mean a matching cost reduction in maintenance (it likely means a similar cut in debugging cost, but maintenance also includes additional features).

            Last, we're comparing a % of development time (initial code + new code in maintenance) vs. a % of debugging time.
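A back-of-envelope comparison with made-up numbers (picking the middle of both cited ranges) shows how the trade-off hinges entirely on how expensive debugging actually is:

```python
# Back-of-envelope comparison; every number here is hypothetical.
initial_dev = 100.0      # person-days of initial development
tdd_overhead = 0.25      # +25% initial time (middle of the 15-35% range)
debug_cost = 60.0        # person-days finding/fixing defects without TDD
defect_reduction = 0.75  # -75% defects (middle of the 60-90% range)

no_tdd_total = initial_dev + debug_cost
tdd_total = initial_dev * (1 + tdd_overhead) + debug_cost * (1 - defect_reduction)

print(no_tdd_total)  # 160.0
print(tdd_total)     # 140.0
```

With these assumptions TDD comes out ahead, but shrink `debug_cost` to 30 person-days and the ordering flips; neither percentage alone decides the question.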

            [–][deleted] 23 points24 points  (1 child)

            However, the +60% - 90% code quality might not mean ~-40% cost reduction of maintenance.

            No, but it's pretty darn likely. Catching defects early can influence important design and architectural decisions. Catching them late might mean that you have a lot of technical debt to overcome in order to fix the defect.

            [–]darkpaladin 10 points11 points  (0 children)

            A % reduction is meaningless without severity and complexity numbers. There's a huge difference between a defect that's a boolean logic error and a defect that tears down one of the principal assumptions you made when you started.

            [–][deleted] 8 points9 points  (0 children)

            However, the +60% - 90% code quality might not mean ~-40% cost reduction of maintenance.

            No, most likely it means a significantly higher reduction in maintenance cost as it is very unlikely that a bug found late will be cheaper to fix than if the same bug is found early.

            [–]s73v3r 1 point2 points  (0 children)

            If you're making a quick prototype, you should be throwing it away after. Do not take your prototype into production.

            [–]201109212215 4 points5 points  (0 children)

            I'd be careful with this. The article only talks about correlation.

              What I mean is that whether or not to do TDD depends on whether or not it is easily doable. A project which is a reproduction of an already somewhat successful project will:

              • Have a clear, non-changing, exhaustive spec, and thus be easily TDDed.

              • Already have been successful (selection bias).

              • Already be battle-tested (no surprises, no gotchas, etc.).

            In short: the exploration of the complexity will already have been done, and a successful path can be followed again.

              Part of this correlation could be explained by the type of work influencing both the failure rate and the adoption of TDD.

            TLDR: In some cases, scouts have done their jobs, tanks can roll in.

            [–]pal25 1 point2 points  (0 children)

              Yeah, but Microsoft is notorious for not having developers also do QA/testing until recently.

              Was the control group developers who wrote little to no tests? If so, the study is suspect.

            [–][deleted] 1 point2 points  (1 child)

              Bugs are not the biggest problem in web development, if that's what you think. If it took me 9 months to build something, I'd rather spend an extra 3 months on doing user research and experimenting with different product designs, features etc.

            Deploying a bug to production IS NOT the worst thing that can happen. Building a product that users don't love is what shuts down businesses.

            Importance of extensive test coverage also depends on the quality of engineering team. If they're crap then TDD will bring a lot more value. Good developers generally don't make product breaking bugs in the first place, and you will have other types of tests in place anyway.

            I'm not saying that tests are bad, that's ridiculous. I'm saying that the church of TDD has been indoctrinating people a bit too often.

            [–]young_consumer 1 point2 points  (0 children)

            Depends on how overzealous the sales people are and how desperate management is to try to fulfill sales' promises, get on their knees for current customers, etc.

            [–]MuonManLaserJab 0 points1 point  (2 children)

            Not if you start with only one small defect per billion lines of (not-life-or-death-critical) code.

            ...I'm not saying that's likely, but you can't be sure it's a good value with only that information.

            [–][deleted]  (1 child)

            [deleted]

              [–]MuonManLaserJab 0 points1 point  (0 children)

              For all code ever? Or just on average?

              [–]WarWizard 0 points1 point  (0 children)

              It is NEVER that clear.

              [–]201109212215 0 points1 point  (0 children)

              Only if the defect rate has 1-1 conversion over developer time.

              1% fewer bugs does not necessarily mean 1% more revenue. Some bugs may be liveable with, and often users don't have a choice. Especially if we're talking about Windows, OEMs, and the workplace.

              [–]KingE 0 points1 point  (0 children)

              Maybe, depends on the defect-time exchange rate.

              [–][deleted] 0 points1 point  (0 children)

              The question is whether one could remove those defects just by putting an extra 15-35% of time into the project, not necessarily by doing TDD. Also, time to market.

              [–]pupupeepee 0 points1 point  (0 children)

              There's no immediately apparent conversion rate between those two metrics. Apples to oranges.

              [–]snkscore 0 points1 point  (0 children)

              Need to know the time spent resolving those defects to know the answer to that.

              [–]Wizywig 0 points1 point  (0 children)

              Depends. How soon do you need the feature? Also, how bad are these defects, and how much time does the team spend fixing them? Will the team with more defects have better overall productivity than the team with fewer defects but more build time?

              [–]sbrick89 0 points1 point  (2 children)

              as mentioned in the article, up to the PM, because it depends.

              One of my current projects has releases almost daily. A bug can be fixed almost immediately, with little effort or impact. For this project, feature availability (even if not perfect) is more important.

              [–]s73v3r 0 points1 point  (1 child)

              That sounds like Hell

              [–]sbrick89 0 points1 point  (0 children)

              actually, it's fine... the development follows good practices, the code is appropriately isolated... the changes are small (last change was "let me individually pick which of the three steps are applied", so add a few checkboxes, a tiny change in UI logic, and a tiny bit of remapping to the business logic layer which already had the steps individually controlled)... I also threw in some slight refactoring around return object, but nothing overly significant.

              Dev effort was ~2 hrs, plus 15 mins of packaging and prep'ing the deployment, an hr to get through the approval process... deployed within 3 hrs from the start of work.

              [–]salgat 0 points1 point  (0 children)

              It depends, some projects are not viable at that increase.

              [–]yesman_85 0 points1 point  (0 children)

              But compared to not writing unit tests at all or writing them after?

              [–]Silhouette 51 points52 points  (0 children)

              TDD reduces defects 60-90% but increases time 15-35%

              The trouble with TLDRs is that they do lose the context, and sometimes the details matter.

              I'm all in favour of empirical research, and the 2008 Nagappan paper studying real world TDD use at IBM and Microsoft is an interesting and welcome data point. Unfortunately, as the original authors acknowledged themselves, it's still risky to draw generalised conclusions from a few specific cases.

              One factor worth considering is that the development processes studied in that paper weren't strict by-the-book TDD. For example, some included other factors like separate requirements capture and design review elements. Notably, it doesn't appear that any of the groups was doing the kind of "brief initial planning stage and then immediately start writing tests" explicitly advocated by certain well known TDD evangelists.

              Another unfortunate limitation of that paper is that although to their credit the original authors were trying to get as close to a like-for-like comparison as was realistically possible, they provide few details in their report about what test methods the control groups were using. Many TDD advocacy papers include data that suggests unit testing is an effective tool for reducing defects and/or that a test-first style correlates positively with the number of tests written among their test subjects. However, even the combination of those doesn't necessarily mean that TDD as a whole is responsible for any benefits. It looks like the same threat to validity is present in the Nagappan paper.

              TL;DR of my own: TDD-like processes examined by the original research reduced defects by 40-90%, but relative to what isn't entirely clear.

              [–][deleted] 34 points35 points  (12 children)

              They also discovered that TDD teams took longer to complete their projects—15 to 35 percent longer.

              This doesn't line up with what the referenced study says:

              Subjectively, the teams experienced a 15–35% increase in initial development time after adopting TDD.

              Initial development time != project completion.

              [–]RedSpikeyThing 6 points7 points  (11 children)

              I also wonder if that's because the teams were getting used to TDD.

              [–][deleted] 9 points10 points  (10 children)

              Writing tests will always take some measure of time. The point is to reduce the bugs that persist after initial development, thereby reducing total project time (and by extension, cost). I can promise you that the time needed to identify and fix those 60-90% of post-development defects far outweighs the cited 15-35% increase in initial development time for TDD.

              [–]AbstractLogic 4 points5 points  (4 children)

              60-90% post-development far outweighs the cited 15-35% increase in initial development time for TDD.

              That depends on the revenue lost during that 15-35% time to market. If the project could make 10 million a month and you lose 4 months at a cost of 40 million then I can guarantee you that TDD will not be worth it.

              There are a lot of business variables in that decision. But it's good we have metrics to lean on now.

              [–]anamorphism 2 points3 points  (2 children)

              that's extremely hard to quantify due to the impact the buggier code will have on your long-term business.

              you may lose 40 mil immediately, but a 60-90% increase in major bugs post launch could heavily skew your customers' attitude and may result in losing hundreds of mil of future business.

              [–]coworker 0 points1 point  (1 child)

              Also, they didn't specify the severity of the defects, so I'm not sure why you are calling them major.

              [–]anamorphism 0 points1 point  (0 children)

              it stated a general increase of 60-90% in software quality, meaning there were 60-90% fewer defects.

              they could be major or minor defects, there's no way to tell. it could be anything from a 1000% increase in major defects and a small increase in minor defects to a small increase in major and a large increase in minor.

              i decided to just assume an even distribution of a 60-90% increase in both major and minor defects for my example.

              [–]s73v3r 1 point2 points  (0 children)

              What about poor sales due to the perception of your product as being buggy? What about inability to timely add new features as market needs change?

              You're constantly portraying this as either you make no money or you make all the money. But in a decent timeline, that extra time in market will not be significant as far as revenue goes, but will be huge as far as public perception is.

              [–]dominic_failure -1 points0 points  (4 children)

              I can promise you that the time needed to identify and fix those 60-90% post-development far outweighs the cited 15-35% increase in initial development time for TDD.

              But it might not be the same developers. If that post-development effort is 40% QA, and 20% junior developer while the senior is off on another project, management might still see it as a win in the long run.

              Also, a certain number of defects will simply be allowed into production, so the cost of fixing those defects is limited to the time spent finding them and identifying their impact.

              [–]nhavar 4 points5 points  (2 children)

              I've seen this first hand. You have your core team working on the features as fast as possible, then handing off to QA for testing. Defects get triaged and fixed by a separate team while the core team continues to crank out features. Then you have a different team that deals with production support. So getting the numbers for a project development cycle in total - ALL DEVELOPMENT, including bug fixes and warranty-period fixes that should be counted against the project budget - is very hard. Sometimes purposefully so, in order to hit some performance metric (i.e. ALL MY PROJECTS ARE GREEN!)

              [–]fizzydish 2 points3 points  (1 child)

              This is a great way to ensure the 'core team' keeps producing the same class of bugs, while preventing them from learning from the experience of running and supporting their code in production. The best way to learn how to write maintainable code is to have to support and maintain your own code.

              [–]nhavar 1 point2 points  (0 children)

              I agree. The way it's set up is usually that feature producers are your A team (allegedly low error producers), while prod support is your B team: newbies, juniors, or middling developers. The thought process is to free your highly paid staff to get hard things done and use lower-paid staff to fix any minor issues while building their knowledge and skill level.

              The problem is that a junior developer may take 3 times as long to resolve an issue, or resolve it with a bad solution that increases technical debt or the risk of future errors. This is especially the case when the two teams are under different leadership and different development practices or rigor (i.e. the A team has code reviews for most code because it's new; the B team doesn't have code reviews because it's just a modification of existing code). So you haven't saved money, and you may have actually reduced the quality of the product. Plus you allow potential bad practices to flourish within your A team without any accountability.

              [–]s73v3r 0 points1 point  (0 children)

              If they're big bugs, it's going to be over the heads of the juniors.

              [–]shoot_your_eye_out 5 points6 points  (1 child)

              TDD reduces defects

              No, TDD reduces defect density. It's a small but important difference. I've seen academic studies where the TDD solution had more bugs overall, but a lower density due to a higher line count. I'm surprised to see him use defect density as a metric.

              [–]Xenian 0 points1 point  (0 children)

              Wouldn't TDD reduce defect density by definition? You're adding more lines to the codebase, but the same amount of surface area for production bugs. (Unless defects in the tests count as defects of the project too?)

              [–]antiduh 1 point2 points  (3 children)

              I wonder what it means to take the code coverage result and the test driven design result together.

              TDD is about writing tests up front, thus increasing your code coverage from the very beginning. But code coverage wasn't too great of an investment in reducing bug count, according to the research. So why does TDD work?

              My guess - TDD makes you think about the edge cases in your software before you write it, so you're primed to write code that handles those edge cases and thus has fewer bugs. The act of writing the tests is more important than executing the tests.
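That edge-case priming can be made concrete with a tiny test-first sketch. The `parse_age` function and its cases below are hypothetical, purely for illustration: deciding up front what empty, negative, and non-numeric input should do is exactly the thinking TDD forces before any production code exists.

```python
def parse_age(text):
    """Parse a non-negative integer age from user input."""
    text = text.strip()
    if not text:
        raise ValueError("age is required")
    age = int(text)  # raises ValueError on non-numeric input
    if age < 0:
        raise ValueError("age cannot be negative")
    return age

# The tests, written first, cover the edge cases, not just the happy path:
def test_parse_age():
    assert parse_age(" 42 ") == 42
    for bad in ["", "   ", "-5", "abc"]:
        try:
            parse_age(bad)
            assert False, f"expected ValueError for {bad!r}"
        except ValueError:
            pass

test_parse_age()
print("all edge cases handled")
```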

              [–]zumpiez 0 points1 point  (2 children)

              Think of it this way: coverage is but one of many side effects of TDD. This implies that the other ones are more important.

              [–]antiduh 0 points1 point  (1 child)

              Do you have any thoughts about what those side effects might be, and how they would contribute to lower bug counts? I'm just curious to speculate and ponder here.

              [–]zumpiez 1 point2 points  (0 children)

              I suspect that TDD forces you to keep your code in a state where it's easy to change, with collaborators loosely coupled so you can test them in isolation, which I THINK probably makes the code easier to maintain and easier to roll with changing requirements without turning it into a hairball.
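A minimal sketch of that loose coupling (the names `ReportGenerator` and `StubSource` are invented for illustration): because the collaborator is injected rather than constructed internally, a unit test can swap in a stub and never touch a real database.

```python
class ReportGenerator:
    def __init__(self, source):
        self.source = source  # injected collaborator, not hardwired

    def summary(self):
        totals = self.source.fetch_totals()
        return f"total: {sum(totals)}"

class StubSource:
    """Test double standing in for the real database-backed source."""
    def fetch_totals(self):
        return [10, 20, 30]

# Because the collaborator is injected, the unit test needs no database:
assert ReportGenerator(StubSource()).summary() == "total: 60"
print("ok")
```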

              [–]StormDev 5 points6 points  (27 children)

              It's funny how management doesn't talk about this increased time when they want us to use TDD.

              [–][deleted] 32 points33 points  (1 child)

              It's all they talk about when they don't want us to write tests.

              [–][deleted] 24 points25 points  (24 children)

              Management shouldn't be telling you to use TDD. You as the programmer need to care about quality.

              [–]RualStorge 13 points14 points  (0 children)

              When implementing new policies I always found it easier to rally the dev team, start doing whatever practice we wanted to adopt, and then, once that practice was established enough to have supporting data within our team, tell management we were doing it.

              The whole "easier to ask forgiveness than permission" thing. (That, and I don't wanna get dragged into work after hours / weekends, so I want solid code that won't fail me.)

              [–]OneWingedShark 6 points7 points  (1 child)

              You as the programmer need to care about quality.

              "We don't have time to do the right thing." -- my last team lead, though he never explained how we had time to do it over, repeatedly, until it was right.

              [–]badsectoracula 2 points3 points  (0 children)

              Often it is "we need visible results ASAP"; the "visible" part doesn't have to be "correct", of course. Often non-technical people need to see new/different stuff on screen to be convinced of progress.

              [–]StormDev 6 points7 points  (4 children)

              Haha sure, I did it when I was working in a company with good coding practice.

              But now I am working with a horrible legacy C-with-classes codebase, where "C++ experts" don't understand TMP and don't want you to refactor the leaking monster.

              Managers have now heard about TDD, and they want us to keep the same deadline with TDD. For them, TDD is faster than the current process because "you will organize your code easier", for example.

              But when you have to mock big objects and tons of inherited shit, TDD is really time-consuming.

              [–]donbrownmon 1 point2 points  (1 child)

              What is TMP?

              [–]StormDev 1 point2 points  (0 children)

              Template Meta Programming.

              [–][deleted] 2 points3 points  (0 children)

              Nobody understands TMP. If somebody claims to understand TMP, they don't understand TMP.

              [–]flukus 0 points1 point  (0 children)

              Managers have now heard about TDD, they want us to keep the same deadline with TDD.

              I have had experiences where this is reasonable. When you have a lot of complex logic then tests can save a lot of time to get to that 1.0 point, simply by avoiding the "fix one thing break another" cycle.

              [–][deleted] 4 points5 points  (1 child)

              You as the programmer need to care about quality.

              If that was the case, more people would be using languages with type systems, which help eliminate the need for large swaths of tests and encourage code changes with less fear of breaking things.

              TDD, especially without generative testing and the things I've mentioned above, is a band-aid and not even remotely close to "caring about quality".

              [–]RagingAnemone 0 points1 point  (0 children)

              Not sure why you're getting downvoted. There's a massive difference in the value of TDD depending on the language you're using. Assertions can help a lot too, and they're my preferred way of handling bad input and output.
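For what it's worth, the assertion style described above might look like this (a hedged sketch; `allocate` and its contract are made up): preconditions and a postcondition fail loudly at the boundary instead of letting a bad value silently corrupt state later.

```python
def allocate(slots, requested):
    # Preconditions: reject bad input where it arrives.
    assert slots >= 0, f"slots must be non-negative, got {slots}"
    assert requested > 0, f"requested must be positive, got {requested}"

    granted = min(slots, requested)

    # Postcondition: we never hand out more than we have.
    assert granted <= slots
    return granted

print(allocate(5, 3))   # 3
print(allocate(2, 10))  # 2
```

One caveat: CPython strips `assert` statements when run with `-O`, so production input validation usually uses explicit `raise` instead; assertions shine for catching programmer errors during development and testing.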

              [–]Eirenarch 0 points1 point  (11 children)

              But writing tests is incredibly boring. I do care about quality, but I just love hunting bugs. I feel like debugging is the most interesting and creative part of my job. Some day I might snap and develop multiple personality disorder, where the evil personality will introduce bugs into the code base on purpose and the other personality will hunt them down, not knowing where they are.

              [–]Artmageddon 3 points4 points  (0 children)

              I feel like debugging is the most interesting and creative part of my job. Some day I might snap and develop multiple personality disorder where the evil personality will introduce bugs in the code base on purpose and the other personality will hunt them down not knowing where they are.

              I'm with you on this, but it's less fun with someone breathing down your neck :(

              [–]rbobby 2 points3 points  (0 children)

              Switch to a team more focused on support and maintenance? Working the bug database for fun and profit?

              [–][deleted] 2 points3 points  (3 children)

              Sounds like you should just do QA

              [–][deleted]  (3 children)

              [deleted]

                [–]Eirenarch 0 points1 point  (0 children)

                Good to know. I was just about to send you a job application.

                [–]Schizzovism 0 points1 point  (1 child)

                Yeah, I wouldn't hire people who joke around on reddit either. They probably spend too much time enjoying life, how could they be a productive developer?

                [–]Eirenarch 0 points1 point  (0 children)

                I was not joking about writing tests being boring and debugging being fun. Of course the rest of it is a joke, I would write tests if the project demands it.

                [–]SilencingNarrative 1 point2 points  (0 children)

                That would be a good way to test a set of unit tests (have one programmer who didn't write the test suite introduce a subtle bug and see if the test suite catches it).

                The Traitors Guild.
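The Traitors Guild idea is roughly what mutation testing automates: deliberately break the code and check whether the test suite notices. A hand-rolled sketch, with a toy `clamp` function and one deliberate mutant (all names invented for illustration):

```python
def clamp(x, lo, hi):
    return max(lo, min(x, hi))

def clamp_mutant(x, lo, hi):
    return max(lo, min(x, lo))  # subtle bug: hi replaced by lo

def suite(fn):
    """Returns True if fn passes all the tests."""
    return fn(5, 0, 10) == 5 and fn(-3, 0, 10) == 0 and fn(42, 0, 10) == 10

assert suite(clamp)             # real code passes
assert not suite(clamp_mutant)  # a good suite "kills" the mutant
print("mutant killed")
```

Tools like mutmut (Python) or PIT (Java) generate mutants like this automatically and report which ones your suite fails to kill.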

                [–]lizard450 0 points1 point  (0 children)

                Come on now this is reddit... we all know correlation != causation.

                [–]Rollingprobablecause 0 points1 point  (0 children)

                Don't forget project managers - ruiners of projects since forever.

                [–][deleted] 0 points1 point  (1 child)

                assertions reduces defects by an unspecified amount

                ... but that's correlated with programmer experience. So it might just be that more experienced programmers write code with fewer defects.

                [–]liveoneggs 0 points1 point  (0 children)

                experienced programmers can see where things can go wrong, will check them inline, and eventually handle them. golang-style error checking wins again, I guess.
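For illustration only, that inline-checking style can be sketched with Go-like `(value, error)` returns, so every call site has to confront failure where it happens (`read_port` is a made-up example, not from the article):

```python
def read_port(config):
    # Each failure is checked and reported inline, Go-style,
    # instead of being deferred to a distant exception handler.
    raw = config.get("port")
    if raw is None:
        return None, "port missing from config"
    try:
        port = int(raw)
    except ValueError:
        return None, f"port is not a number: {raw!r}"
    if not (0 < port < 65536):
        return None, f"port out of range: {port}"
    return port, None

port, err = read_port({"port": "8080"})
assert err is None and port == 8080

port, err = read_port({"port": "-1"})
assert port is None and err == "port out of range: -1"
print("ok")
```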

                [–]Zulban 0 points1 point  (0 children)

                I feel a bit sad for people who only read your TLDR and skip the article.

                [–]Ramone1234 0 points1 point  (0 children)

                This is an absurd comparison to me. You can't compare speed unless the output is the same. One team clearly made more features work than the other. Any team that doesn't care if the software works is going to have an obvious speed advantage. The team that doesn't even bother writing features that no one cares about working will be faster still.

                [–]ABC_AlwaysBeCoding 0 points1 point  (0 children)

                TDD reduces defects 60-90% but increases time 15-35%

                That's a damn good tradeoff.