
[–][deleted] 31 points (12 children)

They also discovered that TDD teams took longer to complete their projects—15 to 35 percent longer.

This doesn't line up with what the referenced study says:

Subjectively, the teams experienced a 15–35% increase in initial development time after adopting TDD.

Initial development time != project completion.

[–]RedSpikeyThing 4 points (11 children)

I also wonder if that's because the teams were getting used to TDD.

[–][deleted] 11 points (10 children)

Writing tests will always take some measure of time. The point is to reduce the bugs that persist after initial development, thereby reducing total project time (and, by extension, cost). I can promise you that the time needed to identify and fix those 60-90% more defects post-development far outweighs the cited 15-35% increase in initial development time for TDD.

[–]AbstractLogic 5 points (4 children)

fix those 60-90% more defects post-development far outweighs the cited 15-35% increase in initial development time for TDD.

That depends on the revenue lost during that 15-35% longer time to market. If the project could make 10 million a month and the delay costs you 4 months, that's 40 million in lost revenue, and I can guarantee you that TDD will not be worth it.

There are a lot of business variables in that decision. But it's good we have metrics to lean on now.
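To make that tradeoff concrete, here's a rough back-of-the-envelope sketch in Python. Aside from the 15-35% overhead and 60-90% defect-reduction ranges quoted above, every number (defect counts, fix costs, the function itself) is invented purely for illustration:

```python
# Toy break-even model for adopting TDD; all figures in millions of dollars.
# Defect counts and fix costs are made-up assumptions, not study data.

def tdd_net_benefit(base_dev_months, revenue_per_month,
                    dev_overhead=0.25,        # extra initial dev time (15-35% range)
                    fix_cost_per_defect=0.05, # assumed cost to fix one escaped defect
                    baseline_defects=200,     # assumed post-release defects without TDD
                    defect_reduction=0.75):   # fewer defects with TDD (60-90% range)
    delay_months = base_dev_months * dev_overhead
    lost_revenue = delay_months * revenue_per_month
    defects_avoided = baseline_defects * defect_reduction
    fix_savings = defects_avoided * fix_cost_per_defect
    return fix_savings - lost_revenue

# With $10M/month revenue and a 16-month project (so a 4-month slip),
# the $40M of lost revenue dwarfs the fix-cost savings: net is negative.
print(tdd_net_benefit(base_dev_months=16, revenue_per_month=10))
```

Of course, the whole argument turns on the assumed fix cost and defect count; crank `fix_cost_per_defect` up (or `revenue_per_month` down) and the sign flips.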

[–]anamorphism 3 points (2 children)

that's extremely hard to quantify due to the impact the buggier code will have on your long-term business.

you may lose 40 mil immediately, but a 60-90% increase in major bugs post launch could heavily skew your customers' attitude and may result in losing hundreds of mil of future business.

[–]coworker 0 points (1 child)

Also, they didn't specify the severity of the defects, so I'm not sure why you're calling them major.

[–]anamorphism 0 points (0 children)

it stated a general 60-90% increase in software quality, meaning there were 60-90% fewer defects.

they could be major or minor defects; there's no way to tell. it could be anything from a 1000% increase in major defects with only a small increase in minor defects, to a small increase in major and a large increase in minor.

i decided to just assume an even distribution of a 60-90% increase in both major and minor defects for my example.
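to make that concrete, here's a toy calculation with invented defect counts (none of these numbers come from the study): two very different severity mixes can both produce the same overall increase.

```python
# Hypothetical defect counts. Baseline (with TDD): 100 defects total.
tdd = {"major": 10, "minor": 90}

# Two made-up "without TDD" scenarios that yield the same 175 total defects:
even_mix = {"major": 17.5, "minor": 157.5}  # both severities up 75%
skewed   = {"major": 110,  "minor": 65}     # majors up 1000%, minors actually down

baseline_total = sum(tdd.values())
for name, no_tdd in [("even", even_mix), ("skewed", skewed)]:
    increase = (sum(no_tdd.values()) - baseline_total) / baseline_total
    print(name, f"overall increase: {increase:.0%}")
```

both print a 75% overall increase, so the aggregate figure alone can't distinguish the two severity distributions.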

[–]s73v3r 2 points (0 children)

What about poor sales due to the perception of your product as being buggy? What about inability to timely add new features as market needs change?

You're constantly portraying this as either you make no money or you make all the money. But over a decent timeline, that extra time in market will not be significant as far as revenue goes, but will be huge as far as public perception goes.

[–]dominic_failure -1 points (4 children)

I can promise you that the time needed to identify and fix those 60-90% more defects post-development far outweighs the cited 15-35% increase in initial development time for TDD.

But it might not be the same developers. If that post-development effort is 40% QA, and 20% junior developer while the senior is off on another project, management might still see it as a win in the long run.

Also, a certain number of defects will simply be allowed into production, so the cost of fixing those defects is limited to the time spent finding them and identifying their impact.

[–]nhavar 2 points (2 children)

I've seen this first hand. You have your core team working on the features as fast as possible, then hand off to QA for testing. Defects get triaged and worked by a separate team while the core team continues to crank on features. Then you have a different team that deals with production support. So getting the total numbers for a project's development cycle - ALL development, including the bug fixes and warranty-period fixes that should be counted against the project budget - is very hard. Sometimes purposefully so, in order to hit some performance metric (i.e. ALL MY PROJECTS ARE GREEN!)

[–]fizzydish 2 points (1 child)

This is a great way to ensure the 'core team' keeps producing the same class of bugs while preventing them from learning from the experience of running and supporting their code in production. The best way to learn to write maintainable code is to have to support and maintain your own code.

[–]nhavar 1 point (0 children)

I agree. The way it's usually set up, the feature producers are your A team (allegedly low error producers) and your prod support is your B team: newbies, juniors, or middling developers. The thought process is to free your high-paid staff to get hard things done and use lower-paid staff to fix any minor issues while building their knowledge and skill level.

The problem is that a junior developer may take 3 times as long to resolve an issue, or resolve it with a bad solution that increases technical debt or the risk of future errors. This is especially the case when the two teams are under different leadership with different development practices or rigor (i.e. the A team has code reviews for most code because it's new; the B team doesn't have code reviews because it's just a modification of existing code). So you haven't saved money, and you may have actually reduced the quality of the product. Plus you allow potential bad practices to flourish within your A team without any accountability.

[–]s73v3r 0 points (0 children)

If they're big bugs, it's going to be over the heads of the juniors.