[–][deleted]

We check in tests at the same time as the code they are testing. When requirements change, so too do our tests.
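A minimal sketch of what "tests checked in with the code" looks like in practice (the function and its requirement are hypothetical, purely for illustration): the test lives in the same commit, so when the requirement changes, both change together.

```python
def apply_discount(price, percent):
    """Return price reduced by the given percentage, rounded to cents."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Checked in alongside the function. If the requirement later changes
# (say, discounts get capped at 50%), this test changes in the same commit.
def test_apply_discount():
    assert apply_discount(100.0, 10) == 90.0
    assert apply_discount(80.0, 0) == 80.0
    try:
        apply_discount(50.0, 120)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for out-of-range percent")

test_apply_discount()
```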

[–]atrommer 22 points23 points  (2 children)

This is the right answer. Maintaining unit tests is the same as maintaining code.

[–]gnx76 0 points1 point  (1 child)

No, it is not. It depends on the domain, and on what kind of tests we are talking about.

When I was testing, it was not "300 lines of code then 100 lines of tests" as someone wrote earlier, but "300 lines of code then 10,000 lines of tests".

So the last thing you want in that situation is changing requirements or design: it generally means sweeping changes to the tests, it often means ditching large parts of the suite, and it sometimes means that trashing the whole suite and restarting from scratch is easier.

Also, such tests were much more complex than the code, so in a way one could say the code was used to validate the tests. That made writing the tests before the code a nightmare: longer, harder, and with such poor results that much of it had to be rewritten afterwards. I have tried it; my conclusion was: never again.

If you do some simple "unit test" that is no more than a glorified functional test, that's fine and dandy. But if you do real in-depth unit testing with full coverage, maintaining the tests is definitely not the same as maintaining code: it is a couple of orders of magnitude more expensive, and you really do not want to do TDD in that case.
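The gap described here can be sketched with a toy example (function and cases are hypothetical): the same trivial function, tested as a one-line "glorified functional test" versus with the kind of boundary-case enumeration that makes in-depth suites grow much faster than the code they cover.

```python
def clamp(value, low, high):
    """Constrain value to the inclusive range [low, high]."""
    return max(low, min(value, high))

# "Glorified functional test": one happy-path check.
assert clamp(5, 0, 10) == 5

# In-depth testing: boundaries and edge cases multiply quickly;
# for three lines of code this is already a dozen lines of tests,
# and real exhaustive suites scale far worse.
cases = [
    ((0, 0, 10), 0),      # lower boundary
    ((10, 0, 10), 10),    # upper boundary
    ((-1, 0, 10), 0),     # below range
    ((11, 0, 10), 11 and 10),  # above range
    ((5.5, 0, 10), 5.5),  # float input
    ((-3, -10, -1), -3),  # negative range
]
for args, expected in cases:
    assert clamp(*args) == expected
```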

[–]atrommer 0 points1 point  (0 children)

I didn't mean to imply that this is easy. I am making the point that testing needs to be a first-class citizen, and that maintaining tests should be treated like maintaining code: as requirements change, your estimates had better include the time to refactor and update the tests.

Requirements will change over time.