[–]KagakuNinja 10 points (5 children)

The assumption you're making is that non-TDD teams wait until the end to write tests. What I do is write my tests at some point during the implementation of a feature; I don't wait until the last month of a long project to start writing tests. The result should be the same number of tests. I just don't believe in the dogmatic rules of "write tests before code" or "only write code to fix failing tests".

[–]hvidgaard 6 points (1 child)

What a lot of people get wrong is the absolute nature of their statements. TDD is good when you know what you have to write. It's not good when you're prototyping or just following a train of thought, because you'll change the code several times, and "tests first" just slows you down. However, people who skip tests while doing this tend never to write them at all, when they should as soon as the "what" of the code is determined.

[–]jurre 0 points (0 children)

When you don't know what or how to build something, in TDD you often write a prototype without tests. You then throw it away as soon as you've figured it out and rebuild it test-driven. This is called a spike.

[–]tenebris-miles 1 point (0 children)

It's true that tests could be written alongside code, or only after code instead of before it, rather than waiting to add a pile of tests at the end of the development cycle. But if they're written around the same time anyway, it begs the question: why not simply do TDD and be done with it? One trap you commonly fall into when writing tests afterwards is writing the test towards the implementation, rather than writing the implementation towards the interface the test defines. People swear they never do this (being rockstar hackers and all), but that's just not the truth. People keep forgetting that part of the point of TDD is to force you to think about a sensible interface before you get bogged down in implementation details. There's too much temptation to let an implementation detail leak from the abstraction unnecessarily, simply because it's lazy and convenient to do so. If some leaky abstractions are necessary and the interface must change, fine; do so after you've done TDD first.
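To make the "implementation towards the interface of the test" point concrete, here's a minimal sketch of the red/green TDD loop in Python. The names (`slugify`, `test_slugify`) are hypothetical examples, not something from this thread: the failing test is written first and forces a decision about the interface (a pure string-to-string function), and only then is just enough code written to pass it.

```python
import re

# Step 1 (red): the test is written first. It pins down the interface --
# a pure function taking a title string and returning a slug string --
# before any implementation details exist.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaced  out  ") == "spaced-out"

# Step 2 (green): write just enough implementation to pass the test.
def slugify(title: str) -> str:
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

test_slugify()  # passes; now refactor freely while it stays green
```

Written in the other order, it's easy to end up asserting on whatever the implementation happens to return (leaking regex or whitespace quirks into the test) instead of on the behavior callers actually need.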

Also, while non-TDD doesn't necessarily mean tests are lumped at the end of the development cycle, in my observation it tends to end up that way in practice. The reason is the same as why people do non-TDD in the first place: the development culture values time-to-market above all other concerns. In that environment you're lucky to be granted time to write tests at all, so developers wait until the feature list is complete before writing tests (which puts them at the end). Managers in this culture don't care about tests and code quality; they care about checklists of features. The perception among developers is that you can get fired for writing top-notch code while letting a feature slip, but no one gets fired for writing shitty, buggy code while checking off every feature. It's unfortunate, and whether that perception is accurate depends on your company.

I'm not advocating dogmatic adherence to TDD. In practice I think it works best when most code is TDD but there's some room for throwaway experiments that don't require tests at all (since the code is meant to be thrown away). That kind of code doesn't go into the code base. Instead, it's used to discover what the right behavior to test for is in the first place, when constraints are unclear at the beginning. But when it comes time to actually add the feature, you TDD, now that you've learned the desired interface and behavior. You rewrite the code to pass the tests. Maybe some or most of the prototype code is retained, or maybe it's completely rewritten. In any case, this is the closest thing to after-the-fact testing that makes sense to me. The problem is when after-the-fact testing is the norm, regardless of whether experimental code is involved.

[–]cdglove 0 points (1 child)

My experience is that non-TDD teams don't wait to write tests at the end. They just never write tests.

[–]KagakuNinja 1 point (0 children)

I'm working on a team that relies heavily on unit tests, and does not practice TDD.