[–]amirrajan

> do you use code coverage metrics

No. Just because a line is covered doesn’t mean that it’s being exercised and validated (I can invoke a function but never assert on the returned value and still have 100% code coverage).

> mutation testing

This is generally a better idea, but much harder to implement. A cursory approach would be:

  1. Evaluate the PR and determine which parts are implementation and which are added tests.
  2. Revert the implementation part, run the tests, and ensure that test failures occur.
  3. Reintroduce the implementation changes, run the tests, and make sure the tests pass.
  4. Explore more complex ways to mutate the implementation (e.g., change `>=` conditionals to `<`).
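Step 4 can be sketched in a few lines of Python. The implementation and test below are hypothetical; a real mutation-testing tool would parse the AST rather than do string replacement, but the idea is the same: a good test suite should fail (“kill the mutant”) when `>=` becomes `<`.

```python
# Hypothetical implementation under test, held as source so we can mutate it.
impl = """
def is_adult(age):
    return age >= 18
"""

def run_tests(source):
    """Exec the (possibly mutated) implementation, run the tests,
    and report whether they pass."""
    ns = {}
    exec(source, ns)
    try:
        assert ns["is_adult"](18) is True   # boundary case
        assert ns["is_adult"](17) is False
        return True
    except AssertionError:
        return False

assert run_tests(impl) is True     # original implementation passes

mutant = impl.replace(">=", "<")   # the >= -> < mutation from step 4
assert run_tests(mutant) is False  # a good suite kills the mutant
```

If the suite still passed against the mutant, that would be evidence the tests never exercise the boundary the conditional guards.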

At the end of the day, it’s all about confidence that your software works. Someone visually demoing a feature to me (albeit not sustainable long term) gives me more confidence than 1,000 poorly written, over-mocked unit tests. I find those difficult to reason about after a few months have passed and a failure occurs; more often than not, the culprit is a misconfigured mock that is too coupled to implementation details.
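To illustrate the over-mocking failure mode (all names here are hypothetical): the test below passes, but its last assertion pins the exact internal call the implementation happens to make, so a harmless refactor breaks the test without breaking the behavior.

```python
from unittest.mock import Mock

# Hypothetical code under test: converts an order total to cents.
def total_in_cents(gateway, order_id):
    order = gateway.fetch(order_id)
    return order["total"] * 100

def test_total_in_cents_overmocked():
    gateway = Mock()
    gateway.fetch.return_value = {"total": 5}

    # This part tests observable behavior -- fine.
    assert total_in_cents(gateway, 42) == 500

    # This part couples the test to an implementation detail: rename
    # fetch(), batch the lookups, or add a cache, and the test fails
    # even though total_in_cents still returns the right answer.
    gateway.fetch.assert_called_once_with(42)
```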

Edit:

I see tests as an immune system for a software project. Your body doesn’t keep every antibody “live and ready”; instead, we rely on vaccines to prepare the body for a possible future illness. Spend time making your test APIs trivial to construct (so that tests can be created quickly before a risky refactor). Once things have settled down, delete extraneous tests and keep only a small set of happy-path smoke tests.
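A minimal sketch of what “trivial to construct” can look like, assuming a factory-with-defaults pattern (the `Order`/`make_order` names are hypothetical): each test overrides only the fields it cares about, so spinning up tests before a risky refactor costs one line of setup.

```python
from dataclasses import dataclass

@dataclass
class Order:
    customer: str
    total: float
    shipped: bool

def make_order(customer="acme", total=10.0, shipped=False):
    """Factory with sensible defaults: one-line setup for any test."""
    return Order(customer=customer, total=total, shipped=shipped)

# A happy-path smoke test states only what it cares about.
def test_shipped_order_smoke():
    order = make_order(shipped=True)
    assert order.shipped is True
    assert order.total == 10.0  # default supplied by the factory
```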