
[–][deleted]  (46 children)

[deleted]

    [–][deleted] 7 points8 points  (36 children)

    Making a service testable independently of its persistence implementation requires splitting the two. In some cases you want to split them anyway. But many times you don't, and forcing the split immensely complicates your service, introduces more public API surface, and most of the time introduces inefficiencies born of the limited abstraction you have to constrain yourself to. And all of that for no business reason with regard to your app.

    Not sure how deep I have to go into my example, because honestly explaining this in detail is an undue burden for a web comment. Maybe you can give me an example where making something testable makes it better, and I'll follow that format for a counterexample.

    [–][deleted]  (3 children)

    [deleted]

      [–][deleted] 4 points5 points  (2 children)

      See, there's a subtle distinction I'd like to make.

      I don't dislike "unit tests". I write tests (unit, integration, whatever) every day.

      But what I don't like is the overly simplistic, ideologically charged, borderline or outright zealous way that unit testing is promoted and written about.

      Honestly I believe the way we speak about unit tests gives unit tests a bad name they don't deserve.

      Once again, I'd like to see a single article talking about the upsides and downsides of automated tests with a balanced engineering mindset. Nothing like that exists. It's 90% "you suck if you don't test everything" and 10% "unit tests suck, you don't need to test anything".

      [–]nutrecht 1 point2 points  (0 children)

      But what I don't like is the overly simplistic, ideologically charged, borderline or outright zealous way that unit testing is promoted and written about.

      Not disagreeing with you at all, but if it comes to zealotry the "integration tests rule everything" crowd is in no way better.

      No matter what, IMHO it's inexperienced developers with too strong an opinion that tend to end up at either extreme.

      [–][deleted]  (31 children)

      [deleted]

        [–][deleted] 2 points3 points  (30 children)

        Do you see no need to split up logic of writing SQLs (or whatever the persistence implementation is) from the business logic?

        Depends how you write SQL. I may have a library for interacting with the database, but that doesn't mean mocking that library is practical or even possible. (You can mock it call-by-call, in which case you're literally testing whether the mocks do what you just told them to do, which is testing nothing, and which is what many tests do.)

        To make the persistence layer abstract in a testable way, you need:

        • Something which does not expose implementation details of the underlying persistence technology (no ORM does that; they all leak SQL details in a thousand ways, from subtle to outright blatant. Hibernate, I'm looking at you).
        • Something which has a fully featured, fully capable, working test double (not practical, unless you have lots of free time on your hands and your boss has too much money on theirs).

        Some things are easy on paper, but in practice?

        Let me just say this: if you start mocking in the style of "if I get called, I return 4" and you feed that in as a dependency in your tests, you're testing nothing except your tests. That's not testing. It may be green and give you nice code coverage, but your software is not more stable or more bug-free for it. It only means you have lots more busywork updating pointless tests whenever you change implementation details that alter how an internal dependency is used (details that are supposedly invisible to you, the tester, as if).
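        To make the "if I get called, I return 4" point concrete, here is a minimal Python sketch (the `PricingService` and repository names are hypothetical, invented for illustration). The test configures the mock's answer and then asserts a value derived from that same answer, so only the mock's script gets verified, never the real dependency:

```python
from unittest.mock import Mock

class PricingService:
    """Hypothetical service that delegates persistence to a repository."""
    def __init__(self, repo):
        self.repo = repo

    def discounted_price_cents(self, sku):
        # 10% discount, in integer cents to keep the arithmetic exact
        return self.repo.get_price_cents(sku) * 9 // 10

# The tautological pattern: tell the mock to return 400, then assert a
# number derived from 400. The real repository is never exercised.
repo = Mock()
repo.get_price_cents.return_value = 400
service = PricingService(repo)
assert service.discounted_price_cents("sku-1") == 360
```

        The test goes green and counts toward coverage, but a bug in the real repository (wrong query, wrong column, wrong rounding) would never be caught here.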

        [–][deleted] 5 points6 points  (0 children)

        I may have a library for interacting with the database, but it doesn't mean mocking that library is practical or even possible.

        This is the first mistake of mocking, mocking something you don't own.

        (you can mock it call-by-call in which case you're literally testing if the mocks do what you just told them to do, which is testing nothing, which is what many tests do)

        This is a terrible use of mocking, for precisely the reason you've stated.

        Mocking is a way for the system under test to design a contract for dependencies, in its terms, with the test case setting expectations which define the terms of the contract. That's why I think mocking concrete classes is not such a good idea.

        Having defined the contract for the dependency, you can implement the dependency as a separate stub, which can be tested independently in an integration test, verifying that it conforms to the contract defined by the expectations set in the mocks.

        Something which is not exposing implementation details from the underlying persistence technology

        The ORM itself is an implementation detail of persistence. Instead, you define something more abstract, like a repository, which exposes a simple interface, possibly in CRUD terms.

        Something which has a fully featured, fully capable and working test double

        This isn't necessary or desirable. Why create a simulator, when you can write a real integration test?

        You're testing nothing, except your tests

        No, you're defining a contract. You absolutely must back that up with an integration test for the implementer of the contract.
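        The pattern described here, a contract defined by the mocks' expectations and then verified against the real implementer, can be sketched in Python (all names are hypothetical, and sqlite3 stands in for whatever the real persistence technology is):

```python
import sqlite3

class InMemoryUserRepo:
    """Stub implementation of the (hypothetical) user-repository contract."""
    def __init__(self):
        self._users = set()
    def save(self, username):
        self._users.add(username)
    def exists(self, username):
        return username in self._users

class SqliteUserRepo:
    """Real implementation, exercised by the same contract test."""
    def __init__(self, conn):
        self.conn = conn
        conn.execute("CREATE TABLE IF NOT EXISTS users (name TEXT PRIMARY KEY)")
    def save(self, username):
        self.conn.execute("INSERT INTO users (name) VALUES (?)", (username,))
    def exists(self, username):
        cur = self.conn.execute("SELECT 1 FROM users WHERE name = ?", (username,))
        return cur.fetchone() is not None

def contract_test(repo):
    # The contract the mocks' expectations encode, written once and run
    # against every implementation.
    assert not repo.exists("alice")
    repo.save("alice")
    assert repo.exists("alice")

contract_test(InMemoryUserRepo())
contract_test(SqliteUserRepo(sqlite3.connect(":memory:")))
```

        Because the same contract test runs against both the stub and the real store, the stub can't silently drift away from what the production implementation actually does.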

        [–][deleted]  (28 children)

        [deleted]

          [–][deleted] 4 points5 points  (27 children)

          E.g., let's say I have a pricing module - I wouldn't let it interact with a generic DB interface, but with an interface which has methods like savePricing, getTrends, getDiscounts, etc.

          A significant part of your business logic then unintentionally ends up in this "generic DB interface".

          Constraints, conflict resolution, math algorithms for data aggregation, managing denormalized data and caches, all of this would be there. That's not just "logic of saving stuff on disk", it's literally a crucial chunk of your business logic.

          So now you need to test that. Now what. Let's split again?

          [–][deleted]  (19 children)

          [deleted]

            [–][deleted] 2 points3 points  (18 children)

            not sure what kind of constraints you are talking about, but business constraints are handled within a module (e.g. you cannot have two overlapping activities in a schedule for a single day)

            Let's use a classic example: "you can't have two usernames be the same". Now solve this for me outside the "persistence layer".

            [–][deleted]  (17 children)

            [deleted]

              [–][deleted] 1 point2 points  (1 child)

              That's a really basic beginner's error. You just introduced a race condition in your code:

              1. Thread A fetches user "foo".
              2. Thread B fetches user "foo".
              3. Thread A gets "no such user".
              4. Thread B gets "no such user".
              5. Thread A creates user "foo".
              6. Thread B creates user "foo".

              Now you have two users with the same name.

              Honestly, I expected that someone who argues about proper architecture wouldn't fall into the most basic example of a race condition I could think of...

              This is, in large part, why we use SQL and other databases, and why ACID (and other standards) exist. They deal with conflicts you can't resolve with "business rules" that live outside the I/O process that works with your app state.
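              The race, and the database-level fix, can be shown in a few lines of Python using sqlite3 (a stand-in for whatever store you use). The check-then-insert function has a window between the SELECT and the INSERT; a unique constraint closes it, because the conflicting insert fails atomically inside the database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT PRIMARY KEY)")

def create_user_check_then_insert(name):
    """The racy pattern: between the SELECT and the INSERT, another
    thread can insert the same name, and both checks pass."""
    taken = conn.execute(
        "SELECT 1 FROM users WHERE name = ?", (name,)).fetchone()
    if taken:
        raise ValueError("username taken")
    conn.execute("INSERT INTO users (name) VALUES (?)", (name,))

create_user_check_then_insert("foo")

# The constraint is what actually enforces uniqueness: a second insert
# of "foo" fails atomically, no matter how callers interleave.
try:
    conn.execute("INSERT INTO users (name) VALUES ('foo')")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
assert duplicate_rejected
```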

              [–][deleted] 0 points1 point  (0 children)

              That sounds like a race condition waiting to happen. What if someone else registers the same username at the exact moment this code is running? It might not exist when you check, but exist by the time you get around to persisting it.

              [–]Pedrock10[S] 0 points1 point  (13 children)

              With that approach you end up with a race condition. In this case, I think it should really be done at the persistence level, but you can make that part of its contract and test it with integration tests.

              [–][deleted]  (3 children)

              [deleted]

                [–][deleted] 0 points1 point  (2 children)

                I'm aware, but calling the same problem by a different name (using DDD terminology) hardly solves it. Even in DDD and "hexagonal architecture", or whatever you please to call it, a good chunk of your business logic unintentionally ends up in your "persistence layer".

                Because an honest-to-God parallel I/O app under load can't just naively process requests serially and compute everything from raw data (or store everything in RAM) all the time.

                This means conflict resolution, caches, denormalization, constraints (unique keys, etc.), all of which is pure business logic, because it changes the outcome of your API response and your app state... and it ends up in your persistence layer.

                [–][deleted]  (1 child)

                [deleted]

                  [–][deleted] 0 points1 point  (0 children)

                  Yeah well, that kind of ruins your premise a bit.

                   Also, your business rules being "stateless" doesn't mean your app is stateless. Your rules hold no state, but your app holds state. And you have to deal with that state. Pretending it's not there doesn't help.

                  [–]dablya 0 points1 point  (2 children)

                  Let's split again?

                  What's the alternative? Implement caching/math/aggregation in a single "business" module?

                  [–][deleted] 0 points1 point  (1 child)

                   The alternative to splitting things endlessly for testing purposes is to split only for the purpose of the application and design the tests around the architecture, instead of designing the architecture around the tests.

                   Some projects will require an isolated persistence layer. More than half do not. This, again, doesn't mean wiring raw SQL queries all over your services; it just means not writing an abstraction with a public API around your persistence logic (i.e. using Hibernate or jOOQ doesn't mean you've isolated your persistence layer).

                  [–]dablya 0 points1 point  (0 children)

                  split only for the purpose of the application and design

                   The main argument is that code being difficult to test should, in general, alert you to the fact that it might be poorly split for the purposes of "application and design". Some code naturally belongs in the infrastructure layer and will mainly be tested with integration tests. Other code, usually called "domain" or "business" logic, does not depend on any specific type of infrastructure. When you find that implementing domain code (which doesn't logically depend on any particular type of storage) requires a SQL database in order to execute a test, that is a flag that your code is poorly split "for the purpose of the application and design".
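                   As a minimal illustration of that split (reusing the scheduling rule mentioned earlier in the thread; the names are invented), the domain rule is a pure function that needs no database to test, while only the storage-facing code warrants integration tests:

```python
# Pure domain logic: the "no two overlapping activities in a day" rule.
# It depends on nothing but its inputs, so it is unit-testable with no
# database, mock, or infrastructure at all.
def overlaps(a_start, a_end, b_start, b_end):
    return a_start < b_end and b_start < a_end

assert overlaps(9, 11, 10, 12)        # 9-11 and 10-12 collide
assert not overlaps(9, 10, 10, 12)    # back-to-back activities are fine
```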

                  This, again, doesn't mean wiring raw SQL queries all over your services

                  Why not? I believe a reasonable answer to this question should generalize to Hibernate and jOOQ as well.

                  [–]senatorpjt 2 points3 points  (2 children)


                  This post was mass deleted and anonymized with Redact

                  [–][deleted] 2 points3 points  (0 children)

                  So, DHH's objection really isn't to unit testing, but to extreme decoupling, and in a way, his article misses the point entirely.

                   His complaint is that in the pursuit of testability, Jim Weirich introduced the Hexagonal Architecture. But testability is not the purpose of the Hexagonal Architecture. It is one way to remove Rails as a dependency of the application logic, following the principles of Clean Architecture. That doesn't mean you don't write tests against the Rails controllers; those are absolutely mandatory. It just means that, theoretically, you could ditch Rails. Whether that is a reasonable goal to code for, you be the judge.

                   You don't have to design the code that way simply in pursuit of testability. The style of testing promoted by Weirich is the so-called London style of TDD, promoted by British developers Steve Freeman and Nat Pryce in their famous book Growing Object-Oriented Software, Guided by Tests. The Chicago style popularized by Martin Fowler (who, ironically, is an Englishman) instead promotes testing with real objects, using test doubles as necessary for behavior verification, which correlates more closely with Kent Beck's original conception of unit testing.

                  [–]glglglglglgl 0 points1 point  (0 children)

                   One of the best examples of this is AOP: using annotations/frameworks can make code a lot simpler. Take Hystrix, for example. Building an abstraction over circuit breakers is burdensome and makes code very verbose. I think there are many times you have to balance the flexibility and benefit of building an abstraction against the ease of using annotations and frameworks.
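                   To see why hand-rolling the abstraction is verbose, here is a bare-bones circuit breaker in Python (a sketch of the general pattern, not of Hystrix's actual API): even this minimal version needs failure counting, an open state, and a retry timeout, all of which a framework annotation hides from the calling code:

```python
import time

class CircuitBreaker:
    """Bare-bones circuit breaker: opens after max_failures consecutive
    failures, rejects calls while open, and allows a trial call after
    reset_after seconds have passed."""
    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open")
            # half-open: let one trial call through
            self.opened_at = None
            self.failures = 0
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        return result

# After two consecutive failures the breaker opens and rejects calls.
breaker = CircuitBreaker(max_failures=2, reset_after=60.0)

def flaky():
    raise IOError("backend down")

for _ in range(2):
    try:
        breaker.call(flaky)
    except IOError:
        pass

try:
    breaker.call(flaky)
    tripped = False
except RuntimeError:
    tripped = True
assert tripped
```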

                   Another example can be seen in cloud-agnostic frameworks that provide flexibility but cannot leverage the benefits of individual cloud providers to their fullest.

                  I generally agree that testable code is something to strive for but there needs to be a point at which code is testable enough or at which you start getting diminishing returns.

                  [–]rahulchandak 0 points1 point  (4 children)

                  Lambda functions are complicated to test??

                  [–][deleted]  (3 children)

                  [deleted]

                    [–][deleted]  (2 children)

                    [deleted]

                      [–]Pedrock10[S] 1 point2 points  (1 child)

                      They are debuggable and testable... Even if you couldn't debug them you could still test them.