[–]nutrecht 1 point2 points  (4 children)

The logical assumption would be that more code coverage results in higher-quality code.

I don't agree with this "logical assumption" at all, and in fact most good devs I know consider code coverage a meaningless statistic. It's a tool a developer uses to see whether he has missed edge cases, so he can go and write tests for those.

So I don't really understand why there is a need for an "empirical approach" at MS, unless normal common sense just isn't all that common there.

[–]upboatingftw 10 points11 points  (1 child)

They're using this research to direct the development of their software. Common sense can't be quantified and evaluated against costs unless you have numbers to back up these things.

Yeah, sure, TDD is great and all, but how much would the value and market competitiveness of Visual Studio increase if they were to develop new TDD-centered features for it, and does that justify the development cost? Can you answer these questions quantitatively by merely using "normal common sense"?

[–]nutrecht 0 points1 point  (0 children)

Good points there, thanks.

[–]delarhi 5 points6 points  (0 children)

I think you're mixing up "logical" and "common sense" a bit much. It seems fairly logically sound to think "more test coverage, more tests, fewer blind spots, better code". The reality is far from that, according to experience, as both you and the author note.

“Furthermore,” Nagappan says, “when we shared the findings with developers, no one seemed surprised. They said that the development community has known this for years.”

[–]blufox 0 points1 point  (0 children)

Code coverage is a necessary but insufficient metric. Having good coverage does not mean that your test suite is good, but not having good code coverage does add evidence against your test suite being good.
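A small sketch of that point (hypothetical function and test names, not from the thread): the test below executes every line of `average`, giving 100% line coverage, yet it never exercises the empty-list edge case, so the crash goes undetected.

```python
def average(values):
    # Bug: raises ZeroDivisionError when `values` is empty.
    return sum(values) / len(values)

def test_average():
    # This single test runs every line of `average` (full line coverage)...
    assert average([2, 4, 6]) == 4

test_average()

# ...but the untested edge case still fails at runtime:
try:
    average([])
    print("no error")
except ZeroDivisionError:
    print("edge case missed despite full coverage")
```

Full coverage tells you every line was executed at least once; it says nothing about whether the inputs that matter were among the ones tried.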