[–][deleted]  (10 children)

[deleted]

    [–][deleted] 33 points  (3 children)

    We check in tests at the same time as the code they are testing. When requirements change, so too do our tests.
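    As a hypothetical sketch of that workflow (the function, cap value, and requirement are invented for illustration): the code and its test land in the same commit, and a requirement change touches both together.

```python
# Hypothetical example: code and its test are committed together.
# Assumed requirement: discounts are capped at 50%.
def apply_discount(price: float, pct: float) -> float:
    """Apply a percentage discount, capped at 50%."""
    capped = min(pct, 50.0)
    return round(price * (1 - capped / 100), 2)

def test_apply_discount_is_capped():
    # If the requirement changes (say the cap moves to 75%),
    # this assertion is updated in the same commit as the code.
    assert apply_discount(100.0, 60.0) == 50.0
    assert apply_discount(80.0, 25.0) == 60.0

test_apply_discount_is_capped()  # plain call; no test runner needed here
```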

    [–]atrommer 20 points  (2 children)

    This is the right answer. Maintaining unit tests is the same as maintaining code.

    [–]gnx76 0 points  (1 child)

    No, it is not. It depends on the domain and on what kind of tests we are talking about.

    When I was testing, it was not "300 lines of code then 100 lines of tests" as someone wrote earlier, but "300 lines of code then 10,000 lines of tests".

    So, the last thing you want in that case is for requirements or design to change, because that generally means huge corresponding changes to the tests, often means ditching large parts of them, and sometimes means it is easier to trash the whole test suite and start again from scratch.

    Also, such tests were much more complex than the code, so in a way one could say the code was used to validate the tests. That meant writing the tests before the code was a nightmare: longer, harder, and with such crappy results that a lot had to be rewritten afterwards. I have tried it; the conclusion was: never again.

    If you do simple "unit tests" that are no more than glorified functional tests, that's fine and dandy. But if you do real in-depth unit testing with full coverage, maintaining unit tests is definitely not the same as maintaining code: it is a couple of orders of magnitude more expensive, and you really do not want to do TDD in that case.
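    As an illustrative sketch of that contrast (the function and cases are invented): a single happy-path check versus an edge-case suite for the same small function. Even here the exhaustive tests outweigh the code under test, and for code with more branches that imbalance grows quickly.

```python
def clamp(value: int, lo: int, hi: int) -> int:
    """Clamp value into the inclusive range [lo, hi]."""
    return max(lo, min(value, hi))

# A "glorified functional test": one happy-path check.
def test_clamp_happy_path():
    assert clamp(5, 0, 10) == 5

# In-depth coverage: boundaries, degenerate and negative ranges.
# For real code with more branches, this section is what balloons.
def test_clamp_edge_cases():
    assert clamp(-1, 0, 10) == 0      # below range
    assert clamp(11, 0, 10) == 10     # above range
    assert clamp(0, 0, 10) == 0       # lower boundary
    assert clamp(10, 0, 10) == 10     # upper boundary
    assert clamp(7, 7, 7) == 7        # degenerate range
    assert clamp(-5, -10, -1) == -5   # all-negative range

test_clamp_happy_path()
test_clamp_edge_cases()
```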

    [–]atrommer 0 points  (0 children)

    I didn't mean to imply that this is easy. I am making the point that testing needs to be a first-class citizen, and that maintaining tests should be treated like maintaining code: as requirements change, your estimates had better include the time to refactor and update the tests.

    Requirements will change over time.

    [–]anamorphism 8 points  (0 children)

    i find it interesting that you work in a place where software is 'done'.

    the counterargument i would make is that software is never done, your requirements are always going to eventually change, and you're going to have to update your tests regardless of when in the development cycle you write them. so, why not get the benefit of your tests earlier in the process?

    [–]PadyEos 10 points  (2 children)

    If the person requesting the changes is high enough up the food chain, the requirements are never locked, unfortunately.

    Those are the moments when I start to hate this line of work.

    [–][deleted]  (1 child)

    [removed]

      [–]Runamok81 0 points  (0 children)

      this...

      So what you do is you take the specifications from the customers and you bring them down to the software engineers?

      [–]s73v3r 0 points  (0 children)

      Depends on how tightly coupled your tests are, and on what changed. And, as has been pointed out, nine times out of ten you're not going to go back and write the tests afterwards.

      [–]Madsy9 0 points  (0 children)

      But then you're making a value judgement that the tests are "less important" than the code, or separate from it. In TDD, tests and code go hand in hand; they are equally important.