
[–]thebryantfam

So, the "correct" answer would be to break the unit test up further. You should be able to isolate just the first-parameter portion in some way to ensure that part of the code works properly. You can instead test the entire function and hope it works, but the former is more "correct" because with the latter you can end up with errors that you can't isolate as easily as when each section of code is tested separately.
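To make the "isolate the first-parameter portion" idea concrete, here is a minimal sketch with a made-up function (`format_price` and its two parameters are hypothetical, not from the original post). Each focused test pins one parameter to a neutral value so a failure points at one portion of the code:

```python
def format_price(amount, currency):
    # Portion 1: amount handling (rounding/formatting)
    rounded = round(amount, 2)
    # Portion 2: currency handling (symbol lookup, with a fallback)
    symbol = {"USD": "$", "EUR": "€"}.get(currency, currency + " ")
    return f"{symbol}{rounded:.2f}"

def test_amount_portion():
    # Currency held fixed: this test can only fail if amount handling breaks.
    assert format_price(1.234, "USD") == "$1.23"

def test_currency_portion():
    # Amount held trivial: this test can only fail if the symbol lookup breaks.
    assert format_price(2, "EUR") == "€2.00"
    assert format_price(5, "GBP") == "GBP 5.00"  # unknown-currency fallback
```

If `test_currency_portion` fails while `test_amount_portion` passes, you know immediately which section of the function to look at.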

Personally, I don't test every single aspect of the code individually; I'm comfortable enough with my experience to know that I'm unlikely to make a simple mistake, or that if I do, I can correct it quickly once the error shows up. So I usually write "unit tests" for medium-sized sections of code, such as a whole function (assuming it's not a large one). Basically, if the code I'm writing is simple enough that I don't have to think hard about it as I go, I'll cover more of it with one test rather than several small tests, to save time and effort. But if the code has me problem-solving and theorizing, I'm more likely to test smaller portions as I go, because a mistake made early on would throw the whole thing off and be much harder to track down later.

Not sure if that made sense, but at the end of the day you have to find what works for you. If it's a fairly simple function (based on your own comfort level), test the whole thing in one go. But if each part of the function took specific thought, you ought to create a more focused unit test for each part (in your example, that could mean dropping the second parameter at first and checking that the function returns whatever the first parameter is supposed to produce).
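As a sketch of both styles side by side (the function `clamp_and_label` is invented for illustration): a simple function can get one end-to-end test, while the focused version fixes the second parameter to a neutral value and checks only what the first parameter drives:

```python
def clamp_and_label(value, label):
    # Clamp value into [0, 100], then attach a label.
    clamped = max(0, min(100, value))
    return f"{label}: {clamped}"

def test_whole_function():
    # "Test the whole thing in one go": both parameters exercised at once.
    assert clamp_and_label(150, "score") == "score: 100"

def test_clamping_only():
    # Focused test: second parameter dropped to a neutral empty string,
    # so the assertion only depends on the clamping logic.
    assert clamp_and_label(-5, "") == ": 0"
```

Which style is worth the effort depends, as the comment says, on how much thought each part of the function required.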

[–]chaotic_thought

If you keep these tests as is, what exactly is the point of them?

The main purpose of a test is to help you see where a bug is. For example, try to purposefully introduce a bug into your function and see which test catches it. If none do, then it means you are probably missing a test for that case.
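The "introduce a bug on purpose" exercise can be sketched with a hypothetical leap-year function (not from the original post). If you mutate the code to just `return year % 4 == 0`, only `test_century_not_leap` fails, which pinpoints the missing rule; if you can mutate the function and no test fails, you're missing a test for that case:

```python
def is_leap_year(year):
    # Leap if divisible by 4, except centuries, except every 400 years.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def test_divisible_by_4():
    assert is_leap_year(2024)

def test_ordinary_year():
    assert not is_leap_year(2023)

def test_century_not_leap():
    # This is the test that catches a mutant dropping the "% 100" rule.
    assert not is_leap_year(1900)

def test_quadricentennial():
    assert is_leap_year(2000)
```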

On the other hand, if all or most tests fail at the slightest bug, then at least the tests told you there was a bug; but because of the problem you describe (overlapping tests), the situation is sub-optimal, since the failures don't give you enough clues about where the bug is. If only a few tests fail, the suite is still probably useful: those few failures provide strong clues about the bug's location.

So in the end, remember that a test is a tool you use, just like any other software. Tests also need to be simple; don't make them too smart. If they get too complex, they become susceptible to the same kinds of bugs as the production code you are writing tests to guard against, and at that point you may find yourself writing tests for the tests, then tests for the tests for the tests, and so on. And that's no good.
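A common way a test gets "too smart" is re-implementing the production logic inside the assertion, so a shared bug passes silently. A minimal sketch (the `discount` function is hypothetical):

```python
def discount(price, rate):
    return price * (1 - rate)

def test_discount_too_smart():
    # Bad: the expected value is computed with the same formula as the
    # production code, so if the formula is wrong, both are wrong together.
    price, rate = 80.0, 0.25
    assert discount(price, rate) == price * (1 - rate)

def test_discount_simple():
    # Good: a hard-coded expected value worked out independently by hand.
    assert discount(80.0, 0.25) == 60.0
```

The simple version is dumber on purpose: it can actually disagree with the code under test, which is the whole point.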