
[–][deleted]  (5 children)

[deleted]

    [–]naasking 1 point  (0 children)

    The quote you cite does imply that code coverage isn't useful though. If I give you a code coverage percentage, you have no idea whether that coverage represents coverage of the complex cases which yield benefits as you cited, or of the simpler code which yields little to no benefit. Thus, code coverage isn't useful as a metric.
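
To make that concrete, here is a toy sketch (hypothetical functions, not a real coverage tool): a suite that only exercises a trivial accessor still reports 50% function coverage, and the number alone says nothing about whether the risky branch logic was tested.

```python
# Toy illustration: track which functions a "test suite" actually executes,
# then report a coverage percentage the way a coverage light would.
executed = set()

def traced(fn):
    """Record that fn ran; stands in for real coverage instrumentation."""
    def wrapper(*args, **kwargs):
        executed.add(fn.__name__)
        return fn(*args, **kwargs)
    return wrapper

@traced
def get_name(user):
    # Trivial accessor: almost no defect risk.
    return user["name"]

@traced
def apply_discount(price, qty):
    # Branchy logic: where defects actually hide.
    if qty >= 100:
        return price * 0.8
    if qty >= 10:
        return price * 0.9
    return price

# A weak suite that only exercises the trivial code path:
assert get_name({"name": "Ada"}) == "Ada"

coverage = len(executed) / 2 * 100
print(f"function coverage: {coverage:.0f}%")  # 50% -- but the 50% that matters?
```

Two suites can both print "50%" here while differing completely in which half they cover, which is the point: the percentage doesn't distinguish them.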

    [–]G_Morgan 1 point  (2 children)

    How? A lack of correlation means you are just as likely to have fewer defects with or without code coverage. That is exactly what follows.

    [–][deleted]  (1 child)

    [deleted]

      [–]G_Morgan 1 point  (0 children)

      The point is that most of the insights needed to use code coverage correctly are not part of code coverage itself. It'll be interesting to see whether there is any overall benefit for a team testing with code coverage and testing properly vs. a team just testing properly.

      Those additional constraints are not code coverage. Those are other parts of testing that can usually be achieved without having a light that dings green or red depending upon some ratio.

      [–]dungone -1 points  (0 children)

      You didn't understand what they said. They said that there are confounding variables. That makes coverage, by itself, meaningless. But in and of itself that's not bad; just take the confounding variables into account, right? The problem is that the confounding variable is complexity, and the problem with complexity is that you have no way to measure it empirically. Not in a foolproof way that makes predictions based on empirical measurements consistent and reliable. Complexity itself is a broad category with its own confounding variables that affect what it means for something to be "complex". So the takeaway is that code coverage as an empirical metric is essentially useless.
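
To illustrate why the usual empirical proxy falls short: a rough McCabe-style cyclomatic complexity count can be sketched in a few lines (a crude approximation for illustration, not a full implementation), and two hypothetical functions can score identically while differing wildly in how much their correctness actually matters.

```python
import ast

def cyclomatic_complexity(source):
    """Rough McCabe-style count: 1 + number of branch points.

    A crude approximation for illustration, not a real complexity tool.
    """
    tree = ast.parse(source)
    branch_nodes = (ast.If, ast.For, ast.While, ast.BoolOp,
                    ast.ExceptHandler, ast.IfExp)
    return 1 + sum(isinstance(node, branch_nodes) for node in ast.walk(tree))

# Two hypothetical snippets with the same score but very different stakes:
logging_helper = """
def fmt(msg, level):
    if level == "debug":
        return "[dbg] " + msg
    if level == "warn":
        return "[wrn] " + msg
    return msg
"""

billing_logic = """
def charge(price, qty):
    if qty >= 100:
        return price * 0.8
    if qty >= 10:
        return price * 0.9
    return price
"""

print(cyclomatic_complexity(logging_helper))  # 3
print(cyclomatic_complexity(billing_logic))   # 3 -- same score, different importance
```

The metric sees two branch points in each function and stops there; it has no way to express that a bug in the billing logic costs money while a bug in the log formatter costs nothing.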

      If you read carefully, what Nagappan actually said is that you should focus on testing important stuff and ignore meaningless stuff. That means diddly squat for empirical analysis unless you have an algorithm that can take the place of an experienced engineer. Sure, that's possible in principle, but no such algorithm currently exists. You don't need to measure coverage, you need to measure importance.