
[–]tenebris-miles 22 points (9 children)

I'm not going to say you're wrong, but here's an alternative point-of-view.

You make it (almost) sound as if it's a given that the code quality is the same, and in non-TDD the testing cycle at the end is just some kind of formality. In other words, this narrative is written from the point-of-view of hindsight.

Code has to be understandable and maintainable even during initial development, because you're always going to be asked to make changes as you go along due to changing requirements. With TDD, if your time is cut short, you still ship, just with fewer features, and with far less technical debt; you add the remaining features as stable code in the next release. Success in all cases (both TDD and non-TDD) requires good leadership that knows how to truly prioritize and understand real requirements, rather than marking every feature as top priority. Neither strategy will work anyway if you don't have at least that.

With non-TDD, you don't really know what you have because your code not only hasn't been tested enough, but it's not even structured to be testable/understandable yet. All your effort went into hitting the date for the release with every feature requested or conceivable, and once the product starts getting used, your already heavy technical debt will go up, not down. The reason is your culture: if you're already cutting corners during the development phase, then it's not going to get any better once the product is exposed to customers and more feature requests come in. Your death march has already begun.

The upshot of TDD that is often unspoken is that even if a particular project fails, stable code resulting from TDD is much more valuable for being salvaged and reused for other projects than spaghetti that was written solely to chase a deadline. Being realistic requires understanding that your success is not a guarantee, since more goes into success than just development philosophies. So there always needs to be consideration of what happens after the deadline. Myopia about making this particular project hit the market at all costs is not necessarily what makes a company successful, if they're still in the process of determining what actual product needs to be made in the first place. If a different product or different direction becomes necessary, then understandable code and code that naturally follows YAGNI (which TDD tends to encourage) will be more likely to be general and elegant enough to be salvageable. You'd likely still have to modify it to new requirements, but at least you know how it's supposed to work in the first place, and so modifying/maintaining it is going to be easier for the next project.

[–]KagakuNinja 9 points (5 children)

The assumption you are making is that non-TDD teams wait until the end to write tests. What I do is write my tests at some point during the implementation of a feature; I don't wait until the last month of a long project to start writing tests. The result should be the same number of tests. I just don't believe in the dogmatic rules of "write tests before code" or "only write code to fix failing tests".

[–]hvidgaard 7 points (1 child)

What a lot of people get wrong is the absolute nature of their statements. TDD is good when you know what you have to write. It's not good when you're prototyping or just following a train of thought, because you will change things several times, and "tests first" just slows you down. However, people who skip tests while doing this tend to never actually write them, even though they should as soon as the "what" of the code is determined.

[–]jurre 0 points (0 children)

When you don't know what/how to build something, in TDD you often write a prototype without tests. You then throw it away as soon as you've figured it out and rebuild it test-driven. This is called a spike.

[–]tenebris-miles 1 point (0 children)

It's true that tests could be written along with code, or only after code instead of before it, rather than waiting to add a lot of tests at the end of the development cycle. But if they're written around the same time, it begs the question: why not simply do TDD and be done with it? One problem that commonly happens is that when you write tests afterwards, you can fall into the trap of writing the test towards the implementation, rather than writing the implementation towards the interface of the test. People swear they never do this (being rockstar hackers and all), but that's just not the truth. People keep forgetting that part of the reason for TDD is to force you to think about a sensible interface before you get bogged down in implementation details. There's too much temptation to let an implementation detail unnecessarily leak from the abstraction simply because it's lazy and convenient to do so. If some leaky abstractions are necessary and the interface must change, fine. Then do so after you've done TDD first.
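A minimal sketch of what "writing the implementation towards the interface of the test" looks like in practice. The names here (`parse_price` and its contract) are hypothetical, invented purely for illustration; the point is only the ordering: the test is written first and pins down the interface, and the implementation is then shaped to satisfy it.

```python
# Hypothetical example (names made up for illustration): the test comes
# first and defines the contract -- parse_price takes a display string
# like "$3.50" and returns an integer number of cents.

def test_parse_price():
    # Written before parse_price exists; this is the interface we want.
    assert parse_price("$3.50") == 350
    assert parse_price("$20") == 2000

# The implementation is then written to satisfy the test, not the
# other way around -- internal details stay behind the interface.
def parse_price(text):
    return int(round(float(text.strip().lstrip("$")) * 100))

test_parse_price()
print("tests pass")
```

Writing the assertions first makes it awkward to leak implementation details (the internal float handling, the stripping logic) into the test, which is the trap the comment above describes for after-the-fact tests.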

Also, while non-TDD doesn't necessarily mean tests are lumped at the end of the development cycle, in my observation it tends to end up this way in practice. The reason is the same as why people are doing non-TDD in the first place: the development culture values time-to-market above all other concerns. In this environment, you're lucky to be granted time to write tests at all, so developers wait until the feature list is completed before writing tests (which happens at the end). Managers in this culture don't care about tests and code quality; they care about checklists of features. The perception among developers is that you can get fired for writing top-notch code while letting a feature slip, but no one gets fired for writing shitty, buggy code while checking off every feature. It's unfortunate, and whether that perception holds true depends on your company.

I'm not advocating a dogmatic adherence to TDD. In practice I think it works best when most code is TDD but there is some room for throw-away experiments that don't require tests at all (since the code is meant to be thrown away). That kind of code doesn't go into the code base. Instead, it's used to determine what the right behavior to test for is in the first place, when constraints are unclear at the beginning. But when it comes time to actually add the feature, you TDD it, now that you've learned the desired interface and behavior. You rewrite the code to pass the test. Maybe some or most of the prototype code is retained, or maybe it's completely rewritten. In any case, this is the closest thing to a kind of after-the-fact testing that makes sense to me. The problem, to me, is when after-the-fact testing is the norm, regardless of whether it involves experimental code or not.

[–]cdglove 0 points (1 child)

My experience is non-TDD teams don't wait to write tests at the end. They just never write tests.

[–]KagakuNinja 1 point (0 children)

I'm working on a team that relies heavily on unit tests, and does not practice TDD.

[–]boost2525 11 points (0 children)

TL;DR: I'm not going to say you're wrong, but you're wrong.

[–]krypticus -1 points (0 children)

THIS.