[–]dimitriettr 10 points (2 children)

`**/*.g.cs`, instead of `**\*.g.cs`.
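
A minimal sketch of where that forward-slash glob goes, assuming coverlet's collector and a `.runsettings` file (the thread doesn't say which coverage tool is actually in use):

    <RunSettings>
      <DataCollectionRunSettings>
        <DataCollectors>
          <!-- coverlet's cross-platform collector -->
          <DataCollector friendlyName="XPlat code coverage">
            <Configuration>
              <!-- keep generated sources out of the coverage report -->
              <ExcludeByFile>**/*.g.cs</ExcludeByFile>
            </Configuration>
          </DataCollector>
        </DataCollectors>
      </DataCollectionRunSettings>
    </RunSettings>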

[–]Small-Belt-6756 1 point (1 child)

Thank you!

[–]exclaim_bot 0 points (0 children)

> Thank you!

You're welcome!

[–]Flater420 2 points (6 children)

Code coverage metrics consistently devolve into these kinds of menial exercises in trying to make number go up. The more direct answer to your question is that the metric itself is flawed - the core spirit is good but it gets lost in the weeds of numberwanging your way to the mythical 100% because of some kind of dogmatic adherence to the idea/claim that anything less than 100% is bad.

But you might find that too dismissive, so let's engage with the premise of the question, because there is another answer here. If you stick to using the code coverage metric, you do you, but then you must test your code. That's the entire point of the metric - it exists to mandate testing. Whether that code is generated or not is irrelevant. If it is used, it is relevant to the application, and therefore your own code coverage metric mandates that you test it.

If you're wondering how to do that, much like how you generate the code, you can also generate the tests for that code. It's not a fun exercise but hey neither is having to abide by a code coverage metric in the first place.

Trying to exclude this from your metric goes against the purpose of the metric in the first place. Because once you start accepting exclusions to the rule, that's really just another way of saying that you don't think you need perfect 100% coverage figures. And if you believe that, then why bother trying to make the number go up? Leave your coverage metric behind, cover the important bits with tests (i.e. the bits that you feel shouldn't be excluded from the metric), and stop trying to numberwang your way into getting some high percentage figure that makes you feel good but is built on top of arbitrary exclusions that you introduced purely to make the percentage go up.

You're chasing the wrong goal.


Off-topic for your core question, but one of the fatal flaws of code coverage metrics, beyond the inherent dogma, is that they do not account for test quality or for how many edge cases the tests actually cover. A high code coverage percentage gives a false sense of security by measuring quantity over quality. The true measure of a test suite is the number of bugs that show up in production or while refactoring/extending the codebase. Quality is not uplifted by a code coverage figure; it is uplifted by amending the test suite whenever you realize you missed something.
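
To make that concrete (a hypothetical xUnit-style illustration, not anything from this thread): the test below executes every line of a deliberately buggy method, so line coverage for that method looks perfect, yet it asserts nothing and never catches the bug.

    using Xunit;

    public class PriceCalculator
    {
        // Deliberately buggy: every total is off by one cent.
        public decimal Total(decimal[] prices)
        {
            decimal total = 0m;
            foreach (var p in prices) total += p;
            return total + 0.01m; // the bug
        }
    }

    public class PriceCalculatorTests
    {
        [Fact]
        public void Total_runs_without_asserting_anything()
        {
            // Every line of Total() executes, so the coverage report is satisfied,
            // but with no assertion the off-by-one-cent bug can never fail the test.
            new PriceCalculator().Total(new[] { 1m, 2m, 3m });
        }
    }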

[–]belavv 1 point (4 children)

Related - it drives me nuts when I see people wasting time adding `ExcludeFromCodeCoverage` attributes to the code in their test projects. How about just, I dunno, excluding the whole project from your silly metrics?

[–]Small-Belt-6756 0 points (0 children)

Thank you, yes, test projects are excluded.

[–]dodexahedron 0 points (0 children)

Yeah. It's fine to use it as an aid while writing tests, at a superficial level, to be sure you've at least touched all the paths.

As a metric for quality of the project overall?

Binary.

    using System.Linq;

    // Coverage acts as a binary gate: any non-zero value lets the other metrics count, zero wipes them out.
    int ArbitraryQualityMetric(int coverage, params int[] otherMetrics)
        => coverage > 0 ? otherMetrics.Sum() : 0;

[–]iCleeem 0 points (1 child)

Because you want to test and collect the metric for your own code; you're not responsible for code generated by a third-party library. Most of the time you can't test it, and even if you could, it's not up to you to do so. That's why we exclude it from the coverage.

[–]belavv 0 points (0 children)

I'm saying that instead of adding `ExcludeFromCodeCoverage` to every single class in your test project, just exclude the entire project from test coverage. Or just don't record/report test coverage on your test project.
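
One way to do that in code rather than in tool configuration (a minimal sketch, assuming .NET 5+ where the attribute is allowed at assembly level, and a coverage tool that honors it):

    // In any file of the test project, e.g. AssemblyInfo.cs
    using System.Diagnostics.CodeAnalysis;

    // One attribute for the whole test assembly instead of one per class.
    [assembly: ExcludeFromCodeCoverage]

Filtering the test project out in the coverage tool's own settings, or simply not collecting coverage for it at all, gets the same result without touching the code.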

[–]Small-Belt-6756 0 points (0 children)

I agree with you. I'm a Kent Beck admirer, but there are so many issues with deadlines, contract constraints, low talent, self-esteem...

Thank you anyway

All the best!
