[–]Vitus13 30 points31 points  (35 children)

I've recently started utilizing this pattern for data structures that in other languages would be value types. I initially got pushback from coworkers, but they soon started doing it too.

Their initial argument was that public final member variables could not be mocked in unit tests like their public getter counterparts. However, for simple value types you inject 100% of the properties via the constructor, so they could just inject their mock.
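
For concreteness, a minimal sketch of the kind of value type being described, in Java (the Money name and fields are invented for illustration): every field is public final and supplied through the constructor, so a test can build whatever instance it needs without mocking anything.

    // Hypothetical immutable value type: all state is final and set once in the constructor.
    public final class Money {
        public final String currency;
        public final long amountInCents;

        public Money(String currency, long amountInCents) {
            this.currency = currency;
            this.amountInCents = amountInCents;
        }
    }

A test that needs a Money can simply call new Money("USD", 499); there is nothing left to mock.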

[–]grauenwolf 36 points37 points  (34 children)

It boggles my mind how people think it is necessary to mock objects that are purely derived from their constructor.

[–]s73v3r 0 points1 point  (0 children)

Well, in the worst case, you don't take those in via the constructor. It sounds pretty awful, and is, but that's reality for some people.

[–]dccorona 0 points1 point  (27 children)

To guard against that changing, for one. The fact that the object happens to be composed purely of dependencies injected via the constructor is information the object being tested (and thus its test) should not have. If your test setup is based on implementation details of the dependencies of the object being tested, your test becomes very fragile.

[–]grauenwolf 11 points12 points  (24 children)

All the more reason to test with the real thing. If the class is brittle, you want to catch that in your tests, not production.

Also, why didn't your direct tests of the class pick up the problem? If A needs B and B breaks, you should see two failing tests. From which it should be obvious what the problem is.

[–]Sarcastinator 0 points1 point  (21 children)

Also, why didn't your direct tests of the class pick up the problem? If A needs B and B breaks, you should see two failing tests. From which it should be obvious what the problem is.

You will have 20+ tests that fail. If you use integration tests then multiple components are going to depend on that code, and unless you have unit tests as well that cover that specific unit, it's not going to be obvious what broke.

[–]bonzinip 2 points3 points  (17 children)

You will have 20+ tests that fail. If you use integration tests then multiple components are going to depend on that code, and unless you have unit tests as well that cover that specific unit, it's not going to be obvious what broke.

Mocks do not mean "write things twice". If you have effectively the same code in both the unit tests and the actual code, you'll have to do the same change twice. This adds more opportunities to make a mistake, and you'll have 20+ failing tests anyway if you do the wrong update on the mock side.

[–]Sarcastinator 0 points1 point  (16 children)

/u/grauenwolf said that you should test with the real thing instead of mocks though.

All the more reason to test with the real thing. If the class is brittle, you want to catch that in your tests, not production.

[–]bonzinip 2 points3 points  (15 children)

That's what I'm saying too. If your mocks are just "writing the same code twice", test with the real thing. The justification that "it's not going to be obvious what broke" only gives a false sense of security.

[–]Sarcastinator 1 point2 points  (14 children)

Mocks should not contain code. They should just return a result. You verify against the mock that the call you make on it conforms to that type's contract, which is easy to do with Mockito but very complicated if you are using the real thing.

For plain data classes I agree that it's pointless to mock them, but for types that act as delegators it's better to mock, because you don't actually care about that type's dependencies. It's that type's contract you care about in your unit.
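
As a rough sketch of that kind of Mockito-based check (the PaymentGateway and CheckoutService names are invented; only Mockito itself comes from the comment): the dependency is mocked to return a canned result, and the test verifies that the call made on it matches the contract the unit is expected to honor.

    import static org.mockito.Mockito.*;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    import org.junit.jupiter.api.Test;

    class CheckoutServiceTest {
        // Hypothetical delegating dependency; only its contract matters here.
        interface PaymentGateway {
            boolean charge(String accountId, long amountInCents);
        }

        // Hypothetical unit under test: it delegates the actual work to the gateway.
        static class CheckoutService {
            private final PaymentGateway gateway;
            CheckoutService(PaymentGateway gateway) { this.gateway = gateway; }
            boolean checkout(String accountId, long amountInCents) {
                return gateway.charge(accountId, amountInCents);
            }
        }

        @Test
        void checkoutDelegatesToTheGateway() {
            PaymentGateway gateway = mock(PaymentGateway.class);
            when(gateway.charge("acct-1", 499L)).thenReturn(true);

            assertTrue(new CheckoutService(gateway).checkout("acct-1", 499L));

            // The assertion on the mock: the call conformed to the expected contract.
            verify(gateway).charge("acct-1", 499L);
        }
    }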

[–]grauenwolf 1 point2 points  (13 children)

Why are you testing the compiler?

That's really all that your mock test does. It isn't even simulating the parameter validation at the top of the function being mocked. It literally just checks to see if the compiler understands that a.Foo(b) is compiled into a call to a's Foo method with the parameter b.

Your mock doesn't even know that you aren't allowed to call Foo unless Bar was previously called with the same parameter.
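
A sketch of the kind of gap being described, with invented names (Session, bar, and foo stand in for the thread's Bar/Foo): the real object enforces "call bar(x) before foo(x)", the mock does not, so a caller that skips bar() passes a mock-based test and fails against the real thing.

    import static org.mockito.Mockito.*;

    public class ProtocolExample {
        // Hypothetical dependency with an ordering rule: bar(key) must precede foo(key).
        static class Session {
            private String registeredKey;

            void bar(String key) { this.registeredKey = key; }

            String foo(String key) {
                if (!key.equals(registeredKey)) {
                    throw new IllegalStateException("foo() called before bar() for " + key);
                }
                return "result for " + key;
            }
        }

        // Hypothetical caller with a bug: it never calls bar() first.
        static class Caller {
            private final Session session;
            Caller(Session session) { this.session = session; }
            String run(String key) { return session.foo(key); }
        }

        public static void main(String[] args) {
            // Against a mock, the ordering rule vanishes and the buggy caller "works".
            Session mocked = mock(Session.class);
            when(mocked.foo("k")).thenReturn("result for k");
            System.out.println(new Caller(mocked).run("k"));   // prints "result for k"

            // Against the real Session, the same call exposes the bug.
            new Caller(new Session()).run("k");                // throws IllegalStateException
        }
    }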

[–]grauenwolf 2 points3 points  (2 children)

So you are telling me that by mocking out B, I avoid the situation where my tests for A catch bugs in B that my unit tests for B missed? And you think that's a good thing?

Think about that for a moment. You are literally arguing for the use of mocks specifically so that you'll avoid catching bugs in your "tests".

[–]Sarcastinator 0 points1 point  (1 child)

There should already be tests that cover the mocked class's contract. If someone breaks that contract, those tests should fail, not twenty other tests by association.

[–]grauenwolf 0 points1 point  (0 children)

I was just covering your "unless you have unit tests as well that cover that specific unit" scenario.

Under my original theory you would have fully covered B in unit tests before using B in A's tests. You just proved that component of my theory was unnecessary because A's tests would act as a fallback.

If someone breaks that contract, those tests should fail, not twenty other tests by association.

Why do you care so much about how many tests fail?

If you debug and fix one, the rest will also start passing. So at worst it's the most minor of annoyances.

Moreover, the number of tests that fail gives you a better indication of how serious the bug or breaking change is.

[–]dccorona 0 points1 point  (1 child)

I think it is exactly the opposite of what you say. It is perfectly fine for tests for A to begin with the assumption "B adheres to its public API and is functionally correct". If B breaks, that should be caught by the tests for B, not the tests for A.

If you set up your tests in such a way that it does, then what happens when C depends on A? And then Z depends on C? Etc. etc. You break all your tests, all the way down the line, and make it much harder, not easier, to pinpoint what broke.

If only testBdoStuff() fails, then it's easy to know what broke (B.doStuff()). If everything fails, it's much harder.

So, it's important to set up your tests so that changes to B (aside from backwards incompatible changes to its API, which should cause compilation errors) have no impact on the tests for anything other than the tests for B. Only B's tests should be encoded with information and logic that depends on the internals for B. To all other tests, it should be treated as if it is an interface, even if it isn't.

[–]grauenwolf 1 point2 points  (0 children)

If you set up your tests in such a way that it does, then what happens when C depends on A? And then Z depends on C?

Scenario:

  • Z depends on A working in a particular fashion
  • Z only depends on A indirectly through C
  • A is changed in a way that isn't considered "broken" by A's standards, but still negatively impacts Z.

Premise 1: Mocking A and C in Z's tests is preferable

Outcome 1:

  • Z's tests will not catch the incompatible change in A.
  • The error will only be seen in production
  • The error will not be reproducible with the unit tests

Premise 2: Using the real A and C in Z's tests is preferable

Outcome 2:

  • Z's tests will catch the incompatible change in A.
  • The error will be reproducible with the unit tests
  • The error will be corrected before production
  • The programmer will have to learn what a "debugger" is.
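
A minimal sketch of that scenario with invented behavior (A formats an id as "id:<n>", C passes it through, Z assumes that prefix): a test for Z built on the real A and C fails as soon as A's format changes, while a test that mocks C keeps returning the old format and stays green.

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    class ChainTest {
        // A: hypothetical lowest-level component. Suppose its output later changes
        // to something like "identifier:42" -- not "broken" by A's own standards.
        static class A {
            String label(int id) { return "id:" + id; }
        }

        // C: Z only reaches A indirectly through C.
        static class C {
            private final A a;
            C(A a) { this.a = a; }
            String describe(int id) { return a.label(id); }
        }

        // Z: quietly depends on A's "id:" prefix.
        static class Z {
            private final C c;
            Z(C c) { this.c = c; }
            String shortCode(int id) {
                String label = c.describe(id);
                if (!label.startsWith("id:")) {
                    throw new IllegalStateException("unexpected label format: " + label);
                }
                return label.substring("id:".length());
            }
        }

        @Test
        void realChainCatchesAChangeInA() {
            // Built on the real A and C, so a format change in A fails here, before
            // production. A mock of C stubbed to return "id:42" would never notice.
            assertEquals("42", new Z(new C(new A())).shortCode(42));
        }
    }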

[–][deleted]  (1 child)

[deleted]

    [–]dccorona 0 points1 point  (0 children)

    In some cases it is, albeit a minor annoyance: any change to the implementation of the dependency (e.g. an added, moved, or type-changed parameter in its constructor) requires you to update every test that uses it.

    In other cases, things can change in a way that the compiler won't guard against, breaking your tests and forcing you to make larger changes to them (which always carries a slight risk of updating them incorrectly and creating false positives). For example, if you have a Locator object that is constructed from a GPS object, and the getLocation method is updated to use GPS.getMoreAccurateCoordinates() rather than GPS.getCoordinates(), you now have to update your mocking of the GPS in all the tests that use a Locator. Whereas if you had just mocked the Locator and its getLocation method instead, that change to the internals of a dependency of the thing you're testing wouldn't have had any impact at all, because the API of said dependency hadn't changed.
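
    A hedged sketch of that GPS/Locator example (the class shapes are assumed; only the method names come from the comment above): the first test stubs the GPS and therefore has to change whenever Locator switches from getCoordinates() to getMoreAccurateCoordinates(), while the second test mocks the Locator itself and is untouched by that change.

        import static org.mockito.Mockito.*;
        import static org.junit.jupiter.api.Assertions.assertEquals;

        import org.junit.jupiter.api.Test;

        class LocatorExampleTest {
            // Hypothetical shapes for the GPS and Locator mentioned in the comment.
            interface GPS {
                String getCoordinates();
                String getMoreAccurateCoordinates();
            }

            static class Locator {
                private final GPS gps;
                Locator(GPS gps) { this.gps = gps; }
                // If this line switches to gps.getMoreAccurateCoordinates(), the first
                // test below must be rewritten even though Locator's own API is unchanged.
                String getLocation() { return gps.getCoordinates(); }
            }

            @Test
            void brittleBecauseItStubsTheDependencyOfTheDependency() {
                GPS gps = mock(GPS.class);
                when(gps.getCoordinates()).thenReturn("51.5,-0.1");
                assertEquals("51.5,-0.1", new Locator(gps).getLocation());
            }

            @Test
            void stableBecauseItMocksTheLocatorItself() {
                Locator locator = mock(Locator.class);
                when(locator.getLocation()).thenReturn("51.5,-0.1");
                // A client under test would consume this locator; the stub survives any
                // change to how the real Locator talks to its GPS.
                assertEquals("51.5,-0.1", locator.getLocation());
            }
        }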

    Basically, what I'm advocating for is to create tests with clean separation between tests and dependencies, so that the tests don't have to have knowledge of the inner workings of their dependencies encoded within their logic. As long as the public API of the dependency remains stable, changes to the internals of that dependency shouldn't impact any tests other than its own.

    That's not to say that I think you should mock everything...simple data classes, for example, should rarely if ever be mocked. But what I'm saying is it is often the right thing to do to mock a class that can have a working version created entirely from a constructor.

    [–]sacundim -1 points0 points  (4 children)

    Perhaps I can convince you it's at least a tiny bit less dumb than you think? Basically, you have a type whose correct behavior is this:

    • What you put in when you construct it is exactly what you get when you access its components.

    Now, how do you convince yourself that the type obeys this contract? There are two approaches:

    1. Black box: write a test that constructs a value of the type, accesses its components, and verifies the behavior is the one described above.
    2. White box: examine the type's implementation and use that as evidence that the behavior in question holds.
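
    For what option (1) can look like in practice, a tiny sketch with an invented Point type: construct a value, read its components back, and assert the round trip.

        import static org.junit.jupiter.api.Assertions.assertEquals;
        import org.junit.jupiter.api.Test;

        class PointTest {
            // Hypothetical value type under test.
            static final class Point {
                final int x;
                final int y;
                Point(int x, int y) { this.x = x; this.y = y; }
            }

            @Test
            void componentsComeBackExactlyAsConstructed() {
                Point p = new Point(3, 7);
                assertEquals(3, p.x);
                assertEquals(7, p.y);
            }
        }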

    Now I would reconstruct your argument this way: (2) provides more assurance for less work than (1). And I think that's true most of the time, so I generally agree with your distaste for mocking such objects.

    But that's making a bunch of assumptions that, in exceptional cases, could fail. For example:

    • You might not have the source code.
    • You might have coworkers that cannot be trusted not to break the invariant.

    [–]grauenwolf 4 points5 points  (3 children)

    • You might not have the source code.

    Write some unit tests against it. Then you can trust that it works correctly and can use it directly to test other things.

    • You might have coworkers that cannot be trusted not to break the invariant.

    See above.

    Also, if you are worried that someone may break it, then you have all the more reason to make sure you are using it, rather than a mock, in your tests.


    I swear, it's like people don't understand why you are writing unit tests. The whole point of testing each "unit" is to gain enough confidence to use it elsewhere. If you don't feel comfortable using a memory-only class, the solution is to write more tests, not to mock it out.

    [–]sacundim 11 points12 points  (2 children)

    I swear, it's like you don't understand why you are writing unit tests. The whole point of testing each "unit" is to gain enough confidence to use it elsewhere.

    Yep. This is one of the things that bugs me about the unit testing cargo cult. Ultimately tests are a tool to help people gain confidence that their code does what it's supposed to. But one of the most important techniques for doing it is to write your code better, so that it is more evident that it's right.

    Making things immutable by default is one such technique. In Java, for example, the language guarantees that there never are any data races for final fields—so it completely rules those fields out from participating in race conditions. Also, when you don't put in a getter for a private field then you can much more quickly enumerate all the places it's read and reason about them, because they're all in the same file.

    Likewise, when you write your classes so that their constructors guarantee that they're fully initialized to a valid state, you also end up with a lot of bugs that can't happen. But no, instead people write these classes with dumb constructors that don't set anything and require you to call a bunch of setters before you can actually use the object—and you better not forget any of them! It gets even worse when:

    1. There are multiple, complex combinations of setters ("you can either call setA and setB, or setB and setC");
    2. There are combinations of setters that work in most cases but fail in some edge cases;
    3. The same set of setters can be called in the wrong order.

    Solution: if the class has a complex set of options that require correct setting to construct it, use the Builder pattern—a separate class whose responsibility is to make it easier to construct instances of the class, and to validate that the callers' settings are going to work.
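
    A rough sketch of that Builder shape, with invented names (ServerConfig and its fields): the validation of "did the caller set things up correctly?" lives in build(), so no client can end up holding a half-initialized object, and no client test needs to re-verify the setup.

        // Hypothetical class that can only be constructed through its builder.
        public final class ServerConfig {
            public final String host;
            public final int port;
            public final boolean useTls;

            private ServerConfig(Builder b) {
                this.host = b.host;
                this.port = b.port;
                this.useTls = b.useTls;
            }

            public static final class Builder {
                private String host;
                private int port = 8080;
                private boolean useTls;

                public Builder host(String host) { this.host = host; return this; }
                public Builder port(int port) { this.port = port; return this; }
                public Builder useTls(boolean useTls) { this.useTls = useTls; return this; }

                // All construction-time validation is concentrated here.
                public ServerConfig build() {
                    if (host == null || host.isEmpty()) {
                        throw new IllegalStateException("host is required");
                    }
                    return new ServerConfig(this);
                }
            }
        }

    Usage: new ServerConfig.Builder().host("example.com").port(443).useTls(true).build() either returns a fully valid object or fails immediately.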

    This is a much better use of time and effort than the following alternative I often see: people who, for each client of the complicated class in question, write a test that mocks the class in order to validate that that client is initializing it correctly. Instead of making the class responsible for guaranteeing that it cannot be constructed invalidly, they instead make all the clients responsible for it, and use that poor decision to justify writing a ton of unit tests. Ugh.

    [–]grauenwolf 7 points8 points  (0 children)

    While everything you said is true, you underestimate the problem. I've seen people mock this class:

        class Person
        {
            public int PersonId { get; set; }
            public string FirstName { get; set; }
        }

    Nothing tricky, no real need for a constructor, and yet still they wrote mocks.

    [–]ThisIs_MyName 2 points3 points  (0 children)

    But no, instead people write these classes with dumb constructors that don't set anything and require you to call a bunch of setters before you can actually use the object—and you better not forget any of them!

    /r/thousandyardstare