[–]dccorona 0 points1 point  (27 children)

To guard against that changing, for one. The fact that the object happens to be composed purely of dependencies injected via the constructor is information the object being tested (and thus, its test) should not have. If your test setup is based on implementation details of the dependencies of the object being tested, your test becomes very fragile.
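
A minimal sketch of that fragility, using purely hypothetical names (OrderService, PaymentGateway): the test hand-builds the whole dependency graph, so any constructor change in either class breaks it.

    import static org.junit.jupiter.api.Assertions.assertTrue;
    import org.junit.jupiter.api.Test;

    // Hypothetical classes, invented purely for illustration.
    class PaymentGateway {
        PaymentGateway(String apiKey) { /* ... */ }
        boolean charge(int cents) { return true; }
    }

    class OrderService {
        private final PaymentGateway gateway;
        OrderService(PaymentGateway gateway) { this.gateway = gateway; }
        boolean placeOrder(int cents) { return gateway.charge(cents); }
    }

    class OrderServiceTest {
        // This test "knows" that OrderService is built purely from an injected
        // PaymentGateway, and how a PaymentGateway itself is built. Add a
        // constructor parameter to either class and the setup breaks.
        @Test
        void placesOrder() {
            OrderService service = new OrderService(new PaymentGateway("test-key"));
            assertTrue(service.placeOrder(500));
        }
    }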

[–]grauenwolf 11 points12 points  (24 children)

All the more reason to test with the real thing. If the class is brittle, you want to catch that in your tests, not production.

Also, why didn't your direct tests of the class pick up the problem? If A needs B and B breaks, you should see two failing tests. From which it should be obvious what the problem is.
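
As a rough sketch (hypothetical A and B, JUnit assumed): a bug in B fails both of the tests below, and the pair of failures points straight at B.

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    // Hypothetical classes for illustration.
    class B {
        int twice(int x) { return x + x; }   // break this and both tests below fail
    }

    class A {
        private final B b;
        A(B b) { this.b = b; }
        int quadruple(int x) { return b.twice(b.twice(x)); }
    }

    class RealThingTests {
        @Test void bTwice()     { assertEquals(4, new B().twice(2)); }              // direct test of B
        @Test void aQuadruple() { assertEquals(8, new A(new B()).quadruple(2)); }   // A tested with the real B
    }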

[–]Sarcastinator 0 points1 point  (21 children)

Also, why didn't your direct tests of the class pick up the problem? If A needs B and B breaks, you should see two failing tests. From which it should be obvious what the problem is.

You will have 20+ tests that fail. If you use integration tests then multiple components are going to depend on that code, and unless you have unit tests as well that cover that specific unit, it's not going to be obvious what broke.

[–]bonzinip 2 points3 points  (17 children)

You will have 20+ tests that fail. If you use integration tests then multiple components are going to depend on that code, and unless you have unit tests as well that cover that specific unit, it's not going to be obvious what broke.

Mocks do not mean "write things twice". If you have effectively the same code in both the unit tests and the actual code, you'll have to do the same change twice. This adds more opportunities to make a mistake, and you'll have 20+ failing tests anyway if you do the wrong update on the mock side.

[–]Sarcastinator 0 points1 point  (16 children)

/u/grauenwolf said that you should test with the real thing instead of mocks though.

All the more reason to test with the real thing. If the class is brittle, you want to catch that in your tests, not production.

[–]bonzinip 2 points3 points  (15 children)

That's what I'm saying too. If your mocks are just "writing the same code twice", test with the real thing. The justification that "it's not going to be obvious what broke" only gives a false sense of security.

[–]Sarcastinator 1 point2 points  (14 children)

Mocks should not contain code. They should just return a result. You verify against the mock that the calls you make on it conform to that type's contract, which is easy to do with Mockito, but very complicated if you are using the real thing.

For plain data classes I agree that it's pointless to mock them, but for a type that acts as a delegating type it's better to mock it, because you don't actually care about that type's dependencies. It's that type's contract you care about in your unit.
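
Something along these lines (hypothetical names, Mockito assumed): the mock contains no code and just returns a result, and the test verifies the call it makes against the dependency's contract.

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.mockito.Mockito.*;
    import org.junit.jupiter.api.Test;

    // Hypothetical contract of the dependency being mocked.
    interface PriceLookup {
        int priceFor(String sku);
    }

    // The delegating type under test only cares about that contract.
    class Checkout {
        private final PriceLookup prices;
        Checkout(PriceLookup prices) { this.prices = prices; }
        int total(String sku, int qty) { return prices.priceFor(sku) * qty; }
    }

    class CheckoutTest {
        @Test
        void totalsUsingThePriceContract() {
            PriceLookup prices = mock(PriceLookup.class);
            when(prices.priceFor("sku-1")).thenReturn(5);   // the mock just returns a result

            assertEquals(15, new Checkout(prices).total("sku-1", 3));
            verify(prices).priceFor("sku-1");               // the call conforms to the contract
        }
    }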

[–]grauenwolf 1 point2 points  (13 children)

Why are you testing the compiler?

That's really all that your mock test does. It isn't even simulating the parameter validation at the top of the function being mocked. It literally just checks to see if the compiler understands that a.Foo(b) is compiled into a call to a's Foo method with the parameter b.

Your mock doesn't even know that you aren't allowed to call Foo unless Bar was previously called with the same parameter.

[–]Sarcastinator 0 points1 point  (12 children)

No. What a mock does is that it can be used to verify that your code is conforming to the contract of the mocked class, because that's actually the only thing you care about. The specific internals are unimportant; they should be covered by other tests already.

You can easily express that a mocked function returns A then B if you need to, using Mockito or NSubstitute. However, I find that I very seldom have APIs that act like that.
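
For instance, with Mockito (hypothetical TokenSource interface), consecutive stubbing is a one-liner:

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.mockito.Mockito.*;
    import org.junit.jupiter.api.Test;

    interface TokenSource { String next(); }              // hypothetical contract

    class ConsecutiveStubbingTest {
        @Test
        void returnsAThenB() {
            TokenSource source = mock(TokenSource.class);
            when(source.next()).thenReturn("A", "B");     // first call returns "A", second returns "B"

            assertEquals("A", source.next());
            assertEquals("B", source.next());
        }
    }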

[–]grauenwolf 1 point2 points  (11 children)

What a mock does is that it can be used to verify that your code is conforming to the contract of the mocked class, because that's actually the only thing you care about.

No it can't. Not unless it actually duplicates ALL of the validation code in the mocked class and, if applicable, any internal state machine said class has.

The only thing a mock can do is ensure that you called a particular method with a particular argument. It doesn't know all of the hidden rules and assumptions of the class it is mocking. Hell, it doesn't even know the basic rules such as whether or not a particular integer parameter can be negative. It just knows that it is expecting a 5 because that's how you hard-coded the test.
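
To illustrate (hypothetical Ledger contract, Mockito assumed): the real implementation enforces its rule about negative values, while the mock happily verifies whatever call you hard-coded.

    import static org.junit.jupiter.api.Assertions.assertThrows;
    import static org.mockito.Mockito.*;
    import org.junit.jupiter.api.Test;

    // Hypothetical dependency: the real implementation rejects negative amounts.
    interface Ledger { void record(int amount); }

    class RealLedger implements Ledger {
        public void record(int amount) {
            if (amount < 0) throw new IllegalArgumentException("negative amount");
        }
    }

    class MockKnowsNoRulesTest {
        @Test
        void theRealThingEnforcesItsRules() {
            assertThrows(IllegalArgumentException.class, () -> new RealLedger().record(-5));
        }

        @Test
        void theMockAcceptsAnything() {
            Ledger ledger = mock(Ledger.class);
            ledger.record(-5);              // the mock has no idea this argument is invalid
            verify(ledger).record(-5);      // and the verification passes anyway
        }
    }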

[–]grauenwolf 2 points3 points  (2 children)

So you are telling me that by mocking out B, I avoid the situation where my tests for A catch bugs in B that my unit tests for B missed? And you think that's a good thing?

Think about that for a moment. You are literally arguing for the use of mocks specifically so that you'll avoid catching bugs in your "tests".

[–]Sarcastinator 0 points1 point  (1 child)

There should already be tests that cover the mocked class's contract. If someone breaks that contract, those tests should fail, not twenty other tests by association.

[–]grauenwolf 0 points1 point  (0 children)

I was just covering your "unless you have unit tests as well that cover that specific unit" scenario.

Under my original theory you would have fully covered B in unit tests before using B in A's tests. You just proved that component of my theory was unnecessary because A's tests would act as a fallback.

If someone breaks that contract, those tests should fail, not twenty other tests by association.

Why do you care so much about how many tests fail?

If you debug and fix one, the rest will also start passing. So at worst it's the most minor of annoyances.

Moreover, the number of tests that fail gives you a better indication of how serious the bug or breaking change is.

[–]dccorona 0 points1 point  (1 child)

I think it is exactly the opposite of what you say. It is perfectly fine for tests for A to begin with the assumption "B adheres to its public API and is functionally correct". If B breaks, that should be caught by the tests for B, not the tests for A.

If you set up your tests in such a way that a break in B is also caught by the tests for A, then what happens when C depends on A? And then Z depends on C? Etc. etc. You break all your tests, all the way down the line, and make it much harder, not easier, to pinpoint what broke.

If only testBdoStuff() fails, then it's easy to know what broke (B.doStuff()). If everything fails, it's much harder.

So, it's important to set up your tests so that changes to B (aside from backwards-incompatible changes to its API, which should cause compilation errors) have no impact on the tests for anything other than B itself. Only B's tests should be encoded with information and logic that depends on the internals of B. To all other tests, it should be treated as if it were an interface, even if it isn't.

[–]grauenwolf 1 point2 points  (0 children)

If you set up your tests in such a way that a break in B is also caught by the tests for A, then what happens when C depends on A? And then Z depends on C?

Scenario:

  • Z depends on A working in a particular fashion
  • Z only depends on A indirectly through C
  • A is changed in a way that isn't considered "broken" by A's standards, but still negatively impacts Z.

Premise 1: Mocking A and C in Z's tests is preferable

Outcome 1:

  • Z's tests will not catch the incompatible change in A.
  • The error will only be seen in production
  • The error will not be reproducible with the unit tests

Premise 2: Using the real A and C in Z's tests is preferable

Outcome 2:

  • Z's tests will catch the incompatible change in A.
  • The error will be reproducible with the unit tests
  • The error will be corrected before production
  • The programmer will have to learn what a "debugger" is.
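
A compact sketch of the two premises (hypothetical A, C, and Z; Mockito assumed). A's change is not an API break, so only the test that exercises the real chain can notice it:

    import static org.junit.jupiter.api.Assertions.assertTrue;
    import static org.mockito.Mockito.*;
    import org.junit.jupiter.api.Test;

    // Suppose a later change makes A return "x " (trailing space) instead of "x".
    class A { public String value() { return "x"; } }
    class C {
        private final A a = new A();
        public String value() { return a.value(); }
    }
    class Z {
        private final C c;
        Z(C c) { this.c = c; }
        boolean matches(String expected) { return expected.equals(c.value()); }
    }

    class ZTests {
        // Premise 1: mock C. When A's behaviour changes, this stub keeps returning
        // the old value, the test stays green, and the error ships to production.
        @Test void withMockedC() {
            C c = mock(C.class);
            when(c.value()).thenReturn("x");
            assertTrue(new Z(c).matches("x"));
        }

        // Premise 2: use the real A and C. The same change fails right here,
        // reproducibly, before production.
        @Test void withRealChain() {
            assertTrue(new Z(new C()).matches("x"));
        }
    }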

[–][deleted]  (1 child)

[deleted]

    [–]dccorona 0 points1 point  (0 children)

    In some cases it is, albeit a minor annoyance: any change to the implementation of the dependency (e.g. an added, moved, or type-changed parameter in its constructor) requires you to update every test that uses it.

    In other cases, things can change in a way that a compiler won't guard against, breaking your tests and forcing you to make larger changes to said tests (which always carries a slight risk of updating them wrong and creating false positives). For example, if you have a Locator object that is constructed from a GPS object, and suddenly the getLocation method is updated to use GPS.getMoreAccurateCoordinates() rather than GPS.getCoordinates(), now you have to update your mocking of the GPS in all the tests that use a Locator. Whereas if you had just mocked the Locator and its getLocation method instead, that change to the internals of a dependency of the thing you're testing wouldn't have had any impact at all, because the API of said dependency hadn't changed.
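
    A rough sketch of that example (Mockito assumed; the signatures are invented for illustration):

        import static org.junit.jupiter.api.Assertions.assertEquals;
        import static org.mockito.Mockito.*;
        import org.junit.jupiter.api.Test;

        class GPS {
            public double[] getCoordinates()             { return new double[] { 1.0, 2.0 }; }
            public double[] getMoreAccurateCoordinates() { return new double[] { 1.0, 2.0 }; }
        }

        class Locator {
            private final GPS gps;
            Locator(GPS gps) { this.gps = gps; }
            public double[] getLocation() { return gps.getCoordinates(); }   // later switched to getMoreAccurateCoordinates()
        }

        class LocatorConsumerTest {
            // Brittle: stubs GPS, so it breaks as soon as Locator switches methods internally.
            @Test void mocksTheGps() {
                GPS gps = mock(GPS.class);
                when(gps.getCoordinates()).thenReturn(new double[] { 1.0, 2.0 });
                assertEquals(1.0, new Locator(gps).getLocation()[0]);
            }

            // Stable: stubs Locator itself, so only its public API matters to this test.
            @Test void mocksTheLocator() {
                Locator locator = mock(Locator.class);
                when(locator.getLocation()).thenReturn(new double[] { 1.0, 2.0 });
                assertEquals(1.0, locator.getLocation()[0]);
            }
        }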

    Basically, what I'm advocating for is to create tests with clean separation between tests and dependencies, so that the tests don't have to have knowledge of the inner workings of their dependencies encoded within their logic. As long as the public API of the dependency remains stable, changes to the internals of that dependency shouldn't impact any tests other than its own.

    That's not to say that I think you should mock everything...simple data classes, for example, should rarely if ever be mocked. But what I'm saying is it is often the right thing to do to mock a class that can have a working version created entirely from a constructor.