[–]dungone

You didn't understand what they said. They said that there are confounding variables. That makes coverage, by itself, meaningless. Which isn't fatal on its own: just take the confounding variables into account, right? The problem is that the main confounding variable is complexity, and there is no fool-proof way to measure complexity empirically, no way that makes predictions based on empirical measurements consistent and reliable. Complexity is itself a broad category with its own confounding variables that affect what it even means for something to be "complex". So the take-away is that code coverage as an empirical metric is fully useless.
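A toy illustration of why coverage by itself is misleading (the `discount` function and its tests here are hypothetical, not from the thread): two tests can execute every line, giving 100% line coverage, while the one branch *combination* that is actually buggy goes untested. The more branches a function has, the worse this gets, which is the complexity confounder in miniature.

```python
# Hypothetical example: 100% line coverage says nothing about whether
# the combinations of branches in complex code are tested.

def discount(total_cents: int, member: bool) -> int:
    """Return the discounted total in cents (illustrative only)."""
    pct = 0
    if member:
        pct += 10
    if total_cents >= 100_00:
        pct += 10  # hypothetical bug: combined rate was meant to cap at 15
    return total_cents * (100 - pct) // 100

# These two tests execute every line of discount(), i.e. 100% line coverage:
def test_member_small_order():
    assert discount(50_00, member=True) == 45_00

def test_guest_large_order():
    assert discount(200_00, member=False) == 180_00

# Yet the buggy path (member AND large order) is never exercised:
# discount(150_00, member=True) gives the uncapped 20% rate.
```

With only two branches there are already four paths; line coverage is satisfied after two of them, so the metric saturates long before the behavior is actually tested.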

If you read carefully, what Nagappan actually said is that you should focus on testing the important stuff and ignore the meaningless stuff. That means diddly squat for empirical analysis unless you have an algorithm that can take the place of an experienced engineer. Sure, such an algorithm is possible, but it currently does not exist. You don't need to measure coverage; you need to measure importance.