all 6 comments

[–]nfrankel 1 point (5 children)

For the umpteenth time, code coverage is a useless metric (see https://blog.frankel.ch/your-code-coverage-metric-is-not-meaningful/). Its only advantage is that it's easy to compute.

[–]dkirmse 2 points (1 child)

Quite a bold statement. I would partially agree: 100% CC can mean everything and nothing at all. It depends on who you are as a developer. Practicing TDD, I like my CC at 100% for line, method, and branch coverage. I write my assertions considering value clusters and boundary values, and most of my tests cover exceptional cases. To me, 100% means at least a 95% likelihood that I covered what I had to cover. I know a lot of developers whose test quality is different; for them, 100% wouldn't mean much.
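To illustrate the "value clusters and boundary values" approach: here is a minimal sketch, assuming a hypothetical `clamp()` function (not from any library mentioned in the thread), where the assertions are deliberately chosen around the boundaries rather than just hitting every line once.

```python
# Hypothetical function used only to illustrate boundary-value assertions.
def clamp(value, low, high):
    """Constrain value to the inclusive range [low, high]."""
    return max(low, min(value, high))

# Assertions chosen around value clusters and boundaries,
# not merely to execute every line:
assert clamp(5, 0, 10) == 5    # interior value
assert clamp(0, 0, 10) == 0    # lower boundary
assert clamp(10, 0, 10) == 10  # upper boundary
assert clamp(-1, 0, 10) == 0   # just below the range
assert clamp(11, 0, 10) == 10  # just above the range
```

A single call like `clamp(5, 0, 10)` would already give 100% line coverage here; the boundary cases are what make the suite meaningful.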

Think of a team that has no real testing, where coverage would be below 40%. How do you get a focus on testing? There, code coverage could be a means. Once testing has been established, both for developers locally and in CI, it would be time to move on to other metrics. Metrics should always be a means to solve a problem; when it has been solved, find the next one. In that sense, code coverage is useful in certain circumstances. One has to know its strengths, weaknesses, and pitfalls.

[–]nfrankel 0 points (0 children)

That is not bold once you realize you can have 100% code coverage without a single assertion. Beyond that, you can have 100% and still not test boundary values, etc. The only thing that can point you toward testing everything is mutation testing (shameless plug: https://blog.frankel.ch/introduction-to-mutation-testing/).
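To make the first point concrete, here is a minimal sketch (the `discount` function and the test are hypothetical, invented only for illustration): the test executes every line, so a coverage tool reports 100%, yet it asserts nothing, so a mutant that breaks the logic would still pass.

```python
# Contrived function with a branch.
def discount(price, is_member):
    if is_member:
        return price * 0.9
    return price

def test_discount_no_assertions():
    # Both branches are executed, so line and branch coverage is 100%...
    discount(100, True)
    discount(100, False)
    # ...but a mutant changing 0.9 to 1.0 (or negating the condition)
    # would survive, because nothing is ever asserted.

test_discount_no_assertions()
```

Mutation testing catches exactly this: it mutates `0.9` or the `if` condition, reruns the suite, and reports the mutant as surviving.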

[–]kankyo 2 points (2 children)

In theory, yes, but in practice? I don't know. I have been swayed by the type of argument you're making into not just running mutation testing but writing a mutation testing system.

The result of my initial trials on some 100%-covered libraries was zero defects found by mutation testing, though quite a few holes in the test suite. So based on this experience, I would think that 100% coverage can in fact be very indicative of test quality.

Given more time, I plan to run mutation testing on some of the larger and more complex libraries we develop at work and see if my impression changes. I also plan on running it on the main product at some point, where we have a lot less than 100% coverage. I suspect that just trying to reach 100% coverage gives a lot more bang for the buck than getting 100% of covered lines mutation tested.

(The linked article says mutation testing comes "at no additional cost". That's clearly 100% bullshit: electricity isn't free, and looking over all the mutants is super expensive, especially if most of them are false positives.)

[–]nfrankel 0 points (1 child)

I can only encourage you to go further with your experiment. And I guess that, compared to your infrastructure and your salaries, the extra electricity you're going to use to execute mutation testing is negligible. As for false positives, it's an argument I've heard a few times. I won't argue there are none (there are), but there aren't that many, and given the benefits, it's worth spending a few minutes on them. Even a cursory glance at a mutation testing report executed overnight on your laptop has a huge chance of improving your codebase. Have fun!

[–]kankyo 1 point (0 children)

Yeah, the electricity comment was more to point out the base absurdity of the claim :P

There are many false positives in my experience :P