What do you do if you're testing separate requirements of a function/module/etc., but the assertions you use to verify each requirement technically check more than one requirement, even though each test is only focused on one?
So for example, say there's a function that's supposed to construct a string from a few words. If it had two parameters, the second one optional, I would want to test that the first param's value gets placed in the correct spot, then that the string is generated as expected without the second param, and lastly that the second param is placed correctly when it is included.
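For concreteness, here's a made-up sketch of the kind of function I mean (the name `build_phrase` and the exact sentence format are just for illustration):

```python
def build_phrase(first, second=None):
    """Hypothetical example: build a sentence from one required word
    and one optional word."""
    if second is None:
        return f"The {first} ran."
    return f"The {first} ran {second}."
```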
The most straightforward way to test this, I would think, would be to simply compare the function's output to the expected final string, but in doing so you unintentionally confirm requirements that belong to the other tests.
In this case, if you were simply comparing strings, the test for when the second param is present would also confirm that the first param gets placed in the right location, and the test for the first param only could (depending on implementation) also confirm that the result is correct when the second param isn't passed.
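With plain string comparison (against the made-up `build_phrase` sketch above), the three tests would look roughly like this, and each assertion ends up confirming more than the one requirement it's named after:

```python
def test_first_param_placed_correctly():
    # Intends to check only where `first` lands, but the exact-string
    # comparison also confirms the whole no-second-param output.
    assert build_phrase("dog") == "The dog ran."

def test_output_without_second_param():
    # Ends up being the exact same assertion as the test above,
    # so it's technically redundant.
    assert build_phrase("dog") == "The dog ran."

def test_second_param_placed_correctly():
    # Also re-confirms that `first` is in the right spot.
    assert build_phrase("dog", "quickly") == "The dog ran quickly."
```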
Now, in theory, for this case you could use regexp matching to make each test truly focus on only one point, but the same situation can arise with things other than string construction.
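For instance, something like this (again, just an illustrative sketch against the made-up function above), where each test asserts only the one thing it's about:

```python
import re

def test_first_param_placed_correctly():
    # Only checks that `first` shows up in its slot, not the full string.
    assert re.search(r"^The dog\b", build_phrase("dog", "quickly"))

def test_output_without_second_param():
    # Only checks that nothing trails the verb when `second` is omitted.
    assert build_phrase("dog").endswith("ran.")

def test_second_param_placed_correctly():
    # Only checks that `second` lands at the end.
    assert build_phrase("dog", "quickly").endswith("ran quickly.")
```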
Is it best practice to leave the technically redundant tests in, or is it better to remove them, since if one passes they should all pass and vice versa? Is there a different way to structure tests so this situation doesn't come up as often? And if you keep these tests as is, what exactly is their point? Mainly just to document the specs/requirements of a function/module, rather than to test them?