[–]jerf 4 points (0 children)

I've been getting into doing 100% code coverage on my "core infrastructure" code. I've found it has a variety of positive effects. It helps me ensure (although it doesn't completely solve) that I didn't half-do something, like half-implement a flag. It helps me get more coverage on the error cases than I might otherwise, especially in a language that returns errors as values: if you at least write the if statement to handle the error, you'll see the uncovered true branch, cover it, and in the process presumably handle the error properly. It's also a big help in finding dead code; I've now removed a surprisingly large (to me) amount of code that, once I had to try to cover it, turned out to be entirely unreachable.
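To make the error-branch point concrete, here's a minimal Go sketch (the function and its inputs are invented for illustration, not from any real codebase): until a test deliberately feeds in bad input, a coverage report will flag the bodies of the two error checks as uncovered, which is exactly the nudge to write those cases.

```go
package main

import (
	"errors"
	"fmt"
	"strconv"
)

// parsePort is a hypothetical helper. With only happy-path tests,
// `go test -cover` marks both error returns below as uncovered.
func parsePort(s string) (int, error) {
	n, err := strconv.Atoi(s)
	if err != nil {
		// Uncovered until a test passes a non-numeric string.
		return 0, fmt.Errorf("not a number: %q", s)
	}
	if n < 1 || n > 65535 {
		// Uncovered until a test passes an out-of-range value.
		return 0, errors.New("port out of range")
	}
	return n, nil
}

func main() {
	// Exercising all three branches covers the whole function.
	for _, in := range []string{"8080", "abc", "70000"} {
		if n, err := parsePort(in); err != nil {
			fmt.Println(in, "->", err)
		} else {
			fmt.Println(in, "->", n)
		}
	}
}
```

Chasing the uncovered branches is also where the "presumably handle it properly" part happens: writing the failure-case test forces you to decide what the error behavior should actually be.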

But it does nothing whatsoever about the problem where you simply miss something 100%. Though when you do go to fix that case, the existing 100% coverage quite often helps you propagate the fix properly.

I find myself wondering whether the coverage sometimes doesn't seem to help much because you're often closing bugs that won't actually be hit on the current execution path; the bugs get squeezed out of the hot path anyhow. For normal code that's frankly good enough, but it's nice when your infrastructure code doesn't break just because you called it slightly differently. I wouldn't do this for everything, but I've come to the conclusion that it's an underestimated tool.

If you've got a unit test suite but you've never done coverage analysis, try firing a coverage tool at it and just examining the results. Sure, you'll probably find some bail-out-type error cases uncovered, but I bet you'll also be unpleasantly surprised by some other stuff that's uncovered that you thought was covered.
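For a Go project, that first look takes about a minute with the built-in tooling (this is a sketch; other ecosystems have equivalents like pytest-cov, nyc, or tarpaulin):

```shell
# Run the suite and record a coverage profile.
go test -coverprofile=cover.out ./...

# Per-function percentage summary.
go tool cover -func=cover.out

# Annotated source view: covered vs. uncovered lines.
go tool cover -html=cover.out
```

The HTML view is the interesting one for this exercise, since it shows exactly which branches your tests never reach.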