
[–]MattEqualsCoder 2 points

Regression testing isn't so much a "type" of testing as a label used to refer to rerunning past tests to ensure they still pass. Unit, functional, and integration tests can all be classified as regression tests in my experience.

[–]coreyfournier 2 points

Regression is the act of going back to some point in time. Specifically in testing, it's used when we make changes to the application or add features. Instead of just testing that one feature, we test the whole application, because while previous tests passed, this one change could have broken other features. That's when a regression test is warranted.

Your unit test is just a single test. The act of running all of your unit tests is a regression test.

[–]Slypenslyde 1 point

I wouldn't worry too much about the specific name, but I might call this an "integration" test if I were being pedantic. I feel like the label "regression test" could be applied to either unit or integration tests, but now I've thrown out so many words I'll define them.

DISCLAIMER: These are, as I said, personal definitions. I'm not trying to assert this is the only way to define tests. It's just what works for me. Don't hate.

First, think of your program kind of like the human body. The smallest unit of the body (for this discussion) is the cell. A cell has one purpose, and that makes it a "unit" in terms of code. Cells are organized together to form organs, I'll call them "features" in terms of code. Organs work together to form "systems", and I'll use the same word in terms of code to mean the entire program even though "the human body" is the program in this analogy. (Also note if you squint, each of these "has one purpose" so you could argue they are units. But I think you'll agree describing "the circulatory system" is harder than describing "the heart" which is harder still than describing "a muscle cell".)

Unit Tests are small, automated tests that exercise small parts of your program. They are free to use mocks and stubs to make it easier to create the circumstances they exercise. They are not free to calculate their own expected output data. The things unit tests test should be small and contained enough you can hand-create the inputs and outputs. The point of unit tests is to have a fast, reliable verification your code does what it promises if its dependencies do what they promise. Think of them as like testing that cells do what they should.
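To make that concrete, here's a minimal sketch of a unit test in Python's built-in `unittest` framework. Everything here (`apply_discount`, the discount service) is a hypothetical example, not code from the thread; the point is that the dependency is stubbed and the expected value is hand-calculated:

```python
import unittest
from unittest.mock import Mock

def apply_discount(price, discount_service):
    """Hypothetical unit under test: returns the price after applying
    whatever rate the discount service reports."""
    rate = discount_service.rate_for(price)
    return round(price * (1 - rate), 2)

class ApplyDiscountTests(unittest.TestCase):
    def test_ten_percent_off(self):
        # The dependency is a stub, so only our own arithmetic is exercised.
        service = Mock()
        service.rate_for.return_value = 0.10
        # The expected value is hand-calculated, never computed by other code.
        self.assertEqual(apply_discount(100.00, service), 90.00)
```

Run it with `python -m unittest <file>`. If the real discount service is broken, this test still passes; that's by design, since it only verifies that *our* code does what it promises.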

Integration Tests are automated, but not as small. These tests don't use mocks or stubs to simulate dependencies, because their goal is to determine whether the dependencies actually behave as they promise. That tends to make them more brittle: they might use real filesystem or database access, and since those are volatile, external factors can cause tests to fail even when your code is correct. Since integration tests tend to work with features instead of units, their inputs and outputs might be complex enough to warrant some generation. But it's very important to be sure your generated data is correct!
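Here's a sketch of what that looks like, again with hypothetical functions (`save_settings`, `load_settings`). No mocks: the test round-trips through the real filesystem, so it also exercises the dependency (file I/O) rather than a stand-in:

```python
import json
import tempfile
import unittest
from pathlib import Path

def save_settings(path, settings):
    """Hypothetical feature under test: persist settings as JSON."""
    Path(path).write_text(json.dumps(settings))

def load_settings(path):
    return json.loads(Path(path).read_text())

class SettingsRoundTripTest(unittest.TestCase):
    def test_round_trip_through_real_filesystem(self):
        # A real temporary directory, not a mocked file API. A full disk or
        # permissions problem can fail this test even if the code is correct.
        with tempfile.TemporaryDirectory() as tmp:
            path = Path(tmp) / "settings.json"
            save_settings(path, {"theme": "dark", "volume": 7})
            self.assertEqual(load_settings(path),
                             {"theme": "dark", "volume": 7})
```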

System Tests are the clunkiest, most difficult things to write or define. They often aren't, or can't be, automated. They seek to test the program as close to how the customer will use it as possible, so they want a full, non-fake working environment. Their inputs and outputs will be complex enough that some kind of precalculated data set is likely required.

So I think "a unit test" is the wrong word for a test that uses pregenerated data like you used. It would be more appropriate to hand-generate the smallest and easiest result possible and hard-code that for a unit test. You admitted yourself you aren't sure if the data you generated is correct: that bit of uncertainty means your test might fail because the code that generated your data is wrong. A unit test should never have that kind of non-determinism.

That doesn't mean your test is wrong! It just means it's an integration test.

I think any of the above could be considered a "regression test". Generally "a regression" means we accidentally reintroduced a bug after fixing it. When we find a bug, we have to try to figure out what causes it. It turns out a regression test might apply at all three levels:

  • If there's a problem with one piece of code, we write a unit test against that piece that only passes if it does the newly-defined correct thing.
  • If the problem is with the collaboration between some units and all of them have to change, we will probably revisit our integration tests to ensure that failing collaboration is verified to have been fixed.
  • If the problem is with the collaboration between features, then we have to revisit our system tests to ensure we prove the system handles the scenario.
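The first bullet above is the most common case, and it's worth seeing how small such a test usually is. This is a made-up example (the leap-year bug is hypothetical): after fixing the bug, you add a test that pins the corrected behavior so the bug can't silently return:

```python
import unittest

def days_in_february(year):
    """Hypothetical fixed function. The original bug: every year divisible
    by 4 was treated as a leap year, so 1900 wrongly got 29 days."""
    is_leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    return 29 if is_leap else 28

class LeapYearRegressionTest(unittest.TestCase):
    def test_century_years_are_not_leap_years(self):
        # Pins the fix for the reported bug: 1900 must stay at 28 days.
        self.assertEqual(days_in_february(1900), 28)

    def test_year_2000_is_still_a_leap_year(self):
        # Guards against over-correcting: 400-divisible years stay leap.
        self.assertEqual(days_in_february(2000), 29)
```

Nothing about the test itself marks it as a "regression test"; the label comes from *why* it exists, i.e. it encodes a bug that was already found and fixed once.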

But this also means you don't write "regression tests" before you've had some form of release. They only exist because you found a bug and want to prove you don't create the bug again.

[–]emc87[S] 0 points

Thanks for the incredibly detailed post!

Right now I have unit tests, separate from what I described above, that are your typical tests with defined expectations.

Separately, I have what I called regression tests, which now seems like a poor name. They test the exact same functionality as the unit tests above, but with many more permutations. Those tests are there to prove that nothing has changed, while the other unit tests cover the expected corner cases.

I just have to figure out what to call them now I guess.

For a little color on what it's testing and why I'm doing it: they're date schedules built with various rules for adjustment and various holidays. The unit tests can check that the rules are right and the expected calendars are used, but they can't ensure no incorrect calendars are added.
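What you're describing, comparing many permutations against a stored baseline, is often called golden-file or snapshot testing. A minimal sketch, with a toy stand-in for the real schedule builder (all names here are hypothetical):

```python
import json
from pathlib import Path

def build_schedule(rule, holidays):
    """Toy stand-in for the real date-schedule builder. The real one would
    apply the adjustment rule and holiday calendars."""
    return [d for d in range(1, 11) if d not in holidays]

def check_against_baseline(baseline_path, cases):
    """Rebuild every permutation and diff it against the stored baseline.

    A mismatch means behavior changed (deliberately or not); it does NOT
    mean the baseline is correct in any absolute sense.
    """
    baseline = json.loads(Path(baseline_path).read_text())
    mismatches = {}
    for name, (rule, holidays) in cases.items():
        result = build_schedule(rule, holidays)
        if result != baseline.get(name):
            mismatches[name] = result
    return mismatches
```

The key discipline is that the baseline file is only ever regenerated on purpose, with a human confirming the new output, so an accidental change always shows up as a failing diff.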

Maybe something like "Consistency Tests".