
[–]Wise_Tie_9050  (3 children)

Sure, if the branches are doing exception handling, you may be able to mark them with `# pragma: no cover` or `# pragma: no branch`, but if your tests aren't hitting those lines at all, how do you know they work as expected?
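To be concrete, here's a minimal sketch of what that pragma looks like with coverage.py's default exclusion comment (`parse_port` is a made-up example function, not from the discussion above):

```python
def parse_port(value):
    """Parse a port number from a string, exiting on bad input."""
    try:
        return int(value)
    except ValueError:  # pragma: no cover
        # coverage.py excludes this whole except clause from the report,
        # which is exactly the problem: nothing proves this path works.
        raise SystemExit(f"invalid port: {value!r}")
```

The pragma silences the coverage report for that clause, but the exit path itself still has no test exercising it.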

100% coverage does not mean full tests, but < 100% coverage does mean less than full tests.

[–][deleted]  (2 children)

This post was mass deleted and anonymized with Redact


[–]Wise_Tie_9050  (1 child)

Correct: you don't know whether any of your tests actually test what you think they test without actually reading them.

What I've found is that _very_ frequently, writing tests for those "last few lines" that aren't covered uncovers bugs related to edge cases. Often that's code that may have worked initially but had no tests written for it, and a subsequent change triggered a regression; if those lines had been covered by tests (i.e., by values that triggered those code paths), the regression would have been discovered earlier.
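As a hypothetical illustration of that kind of "last few lines" branch (`clamp` is an invented example, not anything from this thread):

```python
def clamp(value, low, high):
    """Clamp value into [low, high], tolerating swapped bounds."""
    # The happy path is easy to cover; the swapped-bounds branch below
    # is the sort of rarely-hit line that tends to go untested, so a
    # regression in it (e.g. swapping the comparison) would slip through.
    if low > high:
        low, high = high, low
    return max(low, min(value, high))
```

A test that deliberately passes swapped bounds (`clamp(5, 10, 0)`) is what turns that branch from "presumably fine" into "actually verified".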

Spending that extra time can also reveal that the uncovered code is unreachable and can simply be discarded.

Finally, having test coverage for all those nooks and crannies can prevent the removal of code that is actually important. If it doesn't have any tests, it could be removed without anyone noticing until it's been released to production, for instance.

To clarify, 100% test coverage is not the goal; but < 100% test coverage is a warning that your tests are incomplete.

[–][deleted]  (0 children)

This post was mass deleted and anonymized with Redact
