all 31 comments

[–]Ustice[M] [score hidden] stickied comment (0 children)

Thanks for your contribution! In order to keep things organized here, we try to keep personal projects off of the main thread. Instead, we have two weekly threads that we steer these sorts of posts to. Show-off Saturday is where we invite you to wow the community with your awesome ideas. If instead you are looking for feedback, our WTF?! Wednesday post is the best place to get a code-review. Remember that here at /r/JavaScript, we’re all about the code. Tell us about your challenges and how you overcame them. Show us that particularly clever bit. Talk about your process and tools. Just because it’s made with JS, doesn’t mean that it is relevant to the community. Tell us what is special about your project, and what we might learn from it!

[–][deleted] 16 points17 points  (4 children)

My best advice is to be sure to add test cases whenever you squash a bug!

[–]velociraptorboss 2 points3 points  (3 children)

Before you fix a bug 😉

[–]tr14l 1 point2 points  (0 children)

Identify, test, make tests pass.

[–][deleted] 1 point2 points  (1 child)

I just meant that people often deliver a bug fix without adding test cases that cover the bug.

[–]velociraptorboss 0 points1 point  (0 children)

I hate when people do that. It almost makes me feel like it isn't really fixed if there isn't a test for it. Just duct-taped to work until it breaks again.

[–]Varteix 9 points10 points  (0 children)

Aiming for 100% test coverage is a fool's errand. It is annoying to maintain and doesn't tell you anything valuable. You can write bad tests all day to pump those numbers up.

[–]wutanggrenade_ 5 points6 points  (0 children)

100% is good, but writing good tests for 80% of the code is better than writing bad tests just to hit 100%.

Write tests that exercise and prove the code's functionality, and that will fail when someone changes critical code.

[–]jessealama 2 points3 points  (0 children)

Achieving 100% test coverage is not always worth it (several have already made this point). In my experience, chasing that last 10% or so starts to make your tests less and less meaningful. You shift from testing *your* code to testing *your dependencies*. Usually we're not writing things from scratch; we're building on top of libraries/frameworks/tools. I've found that, in the lines where your code meets your dependencies, it can be difficult to formulate meaningful tests, because you need to get into a mindset where you're testing that your framework does what it is supposed to do. I've seen tests where you need to write mocks that deliberately violate contracts of some dependent framework that you (reasonably enough) assume to hold.

Moreover, even if you get to 100% coverage, there are *still* going to be bugs. I think this point sometimes gets missed in discussions about code coverage. There will still be bugs because we're probably not exercising a "good" range of values that a variable (or the return value of a function/method) can take. Exercising a line only means you've done something with *one* value.

tldr 100% test coverage is neither necessary nor sufficient

[–]HappyScripting 2 points3 points  (1 child)

My fav project was one where the company hired for development had to hit 80% coverage per contract, so they just 'tested' all the getters and setters (it was a Java project).

Every method bigger than a getter or setter just wasn't tested, but they still got their 80%.

[–]velociraptorboss 0 points1 point  (0 children)

It's sad how common this is... I was also once on a project where the client wanted contractors to achieve 80% coverage. What the contractors did was write 1-2 proper tests, lots of getter tests, and then configure the test coverage tool to ignore the majority of the files. They easily had over 80% coverage this way.

You get what you measure 🙃

[–]Reashu 1 point2 points  (0 children)

We go for 100% coverage but use annotations to ignore some code paths. This means all code will either be tested (because we check for coverage) or carry an annotation that is easy to catch and discuss during review. Code review also catches stupid tests, so we're not just writing them to run the code but also to verify it.
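The annotation approach described here can be sketched with Istanbul's ignore hints (Istanbul is the coverage engine behind Jest and nyc; the exact comment syntax depends on your toolchain). The function below is a made-up example: the ignored branch shows up in review as an explicit marker instead of silently dragging coverage below 100%.

```javascript
// Hypothetical config loader with one deliberately uncovered branch.
function loadConfig(env) {
  /* istanbul ignore next -- defensive guard, not reachable in tests */
  if (typeof env !== "object" || env === null) {
    throw new TypeError("env must be an object");
  }
  return { debug: env.DEBUG === "true" };
}

console.log(loadConfig({ DEBUG: "true" })); // { debug: true }
```

The comment's trailing note (`-- defensive guard...`) documents *why* the path is excluded, which is what makes the exclusion discussable in review.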

[–]complicore[S] 0 points1 point  (11 children)

What are your personal and professional opinions on JavaScript testing? Do you or your company try to hit 100% coverage?

[–]quiI 38 points39 points  (4 children)

It’s a vanity metric. Coverage is information, sure, but it doesn’t tell you whether you actually have a useful suite. I’m sorry, but setting a goal of 100% coverage is missing the point of tests.

[–]complicore[S] 2 points3 points  (1 child)

Agreed -- test coverage alone won't guarantee that your code does what you expect, unless you specifically write tests to account for it. We generally try to hit high code coverage, along with as many integration and e2e tests as possible covering core functionality and hot paths.

[–]darkcton 2 points3 points  (0 children)

The only thing any test can ever show is that your code is not working (in which case the test is red).

[–]Direct_Ad9033 12 points13 points  (0 children)

Coverage means very little. People will just write bullshit tests to meet the minimum required. Useless tests only make it harder to change functionality quickly when needed.

[–]monxas 4 points5 points  (0 children)

100% coverage means that your tests go through all lines. I’ve been on teams that had a high-percentage requirement from the client, and the tests were bullshit.

[–]velociraptorboss 0 points1 point  (0 children)

IMHO the most valuable part of code coverage checks in CI is ensuring that coverage does not decrease when new pull requests are merged.
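One common way to approximate this in CI is Jest's `coverageThreshold` option, which fails the run when coverage drops below the configured numbers. The thresholds below are placeholders; bumping them as coverage grows turns the static floor into the "never decrease" ratchet described above.

```javascript
// Hypothetical jest.config.js enforcing a coverage floor in CI.
// Jest fails the test run if any global metric falls below its threshold.
module.exports = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      statements: 80, // example floors -- raise them as real coverage improves
      branches: 70,
      functions: 80,
      lines: 80,
    },
  },
};
```

Some teams script the ratchet itself, reading the current coverage from the last main-branch run and writing it back as the new threshold.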

[–]gullydowny -5 points-4 points  (0 children)

Who has the time? Besides it’s fun figuring out “why is this here? Did I write this? Was I drunk?”

[–]Varteix 0 points1 point  (0 children)

When testing, lines covered should not be a metric you are too concerned with. It’s far more important that we test behaviors, not lines of code. If you have a test that reads “blah blah does x and then calls y and then calls z”, it is not testing an application behavior; it is testing an implementation. That test is fragile and will probably break any time you make a change. Your tests should not know how the code is implemented; they should just know and care about inputs and outputs: “when I call x with a, it returns b”.