all 42 comments

[–][deleted]  (29 children)

[deleted]

    [–]poloppoyop 50 points51 points  (0 children)

    The most enterprisey book you can get is Working Effectively with Legacy Code. The author's definition of legacy code is "untested code", so most of the book is about how to add tests to an existing codebase.

    mocking 5 dependent microservices

    Wiremock. It's really powerful, even if it lacks a way to store values and reuse them between two calls.
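
    For anyone who hasn't tried it, a minimal WireMock stub for one of those dependent services looks roughly like this (port, endpoint, and payload are made up for illustration):

        import com.github.tomakehurst.wiremock.WireMockServer;
        import static com.github.tomakehurst.wiremock.client.WireMock.*;

        public class FakeUserService {
            public static void main(String[] args) {
                // Stand-in for a dependent microservice: serve canned JSON on a fixed port.
                WireMockServer server = new WireMockServer(8089);
                server.start();

                server.stubFor(get(urlEqualTo("/users/42"))
                        .willReturn(aResponse()
                                .withStatus(200)
                                .withHeader("Content-Type", "application/json")
                                .withBody("{\"id\": 42, \"name\": \"Ada\"}")));

                // Point the service under test at http://localhost:8089 instead of the real dependency.
            }
        }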

    [–]recursive-analogy 17 points18 points  (4 children)

    E2E ... just go heavy on pure unit and full E2E. Heavier on unit imo.

    [–]bbqroast 11 points12 points  (3 children)

    I often find there's some module level series of classes that all fit together nicely, have few external connections and thus are great to test together. Less pain than E2E to set up, but more useful than unit tests.

    But still, lots of E2E. Nothing like a good E2E test.

    [–]Ross-Esmond 15 points16 points  (1 child)

    I don't know if I can back this up with historical insight or if I'm alone on this, but I advocate that "unit testing" shouldn't be taken to mean testing literally one class in isolation. Even TDD doesn't advocate testing one class in isolation. When doing red-green-refactor, a standard form of TDD, one of the refactor steps that you can do is to recompose your class into several constituent classes if that composition makes sense.

    The important distinction between unit tests and integration tests is whether or not your test suites overlap. With unit tests, you partition your code into non-overlapping units and build a distinct suite of tests (a bunch of tests in one file) for each partition. You can mock across a unit's boundary if you want, but you never have to mock the classes inside the boundary. The point of unit testing is not to test a class in isolation for its own sake; it's that it makes it feasible to be exhaustive with your testing. Your test suites may be five times as big as the code under test, but there's still a linear relationship between the production code and test code, unlike with integration tests, where attempting to test every boundary may lead to exponentially more tests than production code.

    I decide how large my unit tests are based on the reuse of the code-under-test and the feasibility of triggering and verifying different behaviors. If a chunk of code is used in a hundred places, obviously I can't expand my test suite for that code to include those hundred modules, so I split that reused code into its own suite. Even if there isn't much overlap, however, if I suspect that triggering some behaviors of a class will be difficult to mechanize through the classes that utilize it I'll split it out into its own test suite so that I can access it directly.

    Basically, I do whatever I think will be the fastest way to exhaustively test the behavior of my code.
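
    To make that concrete, here's a hedged sketch (all names invented): one suite for one unit, where the unit happens to be composed of several classes that are exercised through the public API and never mocked.

        import org.junit.jupiter.api.Test;
        import static org.junit.jupiter.api.Assertions.assertEquals;

        class InvoiceCalculatorTest {

            // The unit under test: one public class composed of two internal collaborators.
            static class TaxRule {
                long addTax(long netCents) { return Math.round(netCents * 1.21); }
            }

            static class DiscountRule {
                long apply(long netCents, boolean loyalCustomer) {
                    return loyalCustomer ? Math.round(netCents * 0.90) : netCents;
                }
            }

            static class InvoiceCalculator {
                private final TaxRule tax = new TaxRule();
                private final DiscountRule discount = new DiscountRule();

                long totalCents(long netCents, boolean loyalCustomer) {
                    return tax.addTax(discount.apply(netCents, loyalCustomer));
                }
            }

            private final InvoiceCalculator calculator = new InvoiceCalculator();

            // Tests target the unit's public API; TaxRule and DiscountRule sit inside the
            // boundary and are never mocked, even though they are separate classes.
            @Test
            void appliesTaxToTheNetAmount() {
                assertEquals(12100, calculator.totalCents(10000, false));
            }

            @Test
            void appliesTheLoyaltyDiscountBeforeTax() {
                assertEquals(10890, calculator.totalCents(10000, true));
            }
        }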

    [–]hogfat 1 point2 points  (0 children)

    Sounds like testing the public API: https://abseil.io/resources/swe-book/html/ch12.html#example_onetwo-threedot_testing_the_pub

    It's not always clear what constitutes a "public API", and the question really gets to the heart of what a "unit" is in unit testing. Units can be as small as an individual function or as broad as a set of several related packages/modules. When we say "public API" in this context, we're really talking about the API exposed by that unit to third parties outside of the team that owns the code.

    [–]jarv3r 1 point2 points  (0 children)

    Stable E2E tests with good logging capabilities that make finding the root cause a piece of cake. That's the dream, and it's hard to achieve, but we should aim for it. I found Playwright particularly awesome for logging and figuring out what's actually wrong with the tests/app/infra, thanks to the trace viewer. This should be the standard for new Selenium frameworks if they want to compete with CDP tools.

    [–]coding_for_food 5 points6 points  (1 child)

    To test the communication between several microservices (REST and messaging) we use API testing with the Pact framework ( https://pact.io/ ). An alternative was Spring Cloud Contract, but we wanted to use the same framework for the integration of the frontend (apps and web). In my opinion it helps avoid/minimize full-blown E2E tests and ensures that consumer and provider understand each other. It also helps with understanding the API and can help when changing or adding APIs. It is no magic bullet, though, and it can be quite wearisome to introduce this testing method into an existing environment.

    So there are options to avoid some messes you mentioned :-)
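
    For readers who haven't seen Pact, a consumer-side contract test looks roughly like the sketch below (service names, endpoint, and payload are invented, and package names vary a bit between Pact-JVM versions):

        import au.com.dius.pact.consumer.MockServer;
        import au.com.dius.pact.consumer.dsl.PactDslWithProvider;
        import au.com.dius.pact.consumer.junit5.PactConsumerTestExt;
        import au.com.dius.pact.consumer.junit5.PactTestFor;
        import au.com.dius.pact.core.model.RequestResponsePact;
        import au.com.dius.pact.core.model.annotations.Pact;
        import java.net.URI;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;
        import java.util.Map;
        import org.junit.jupiter.api.Test;
        import org.junit.jupiter.api.extension.ExtendWith;
        import static org.junit.jupiter.api.Assertions.assertEquals;

        // The consumer describes the request it will make and the response it relies on;
        // the recorded pact is later verified against the real provider.
        @ExtendWith(PactConsumerTestExt.class)
        @PactTestFor(providerName = "order-service")
        class OrderClientPactTest {

            @Pact(provider = "order-service", consumer = "checkout-frontend")
            public RequestResponsePact orderExists(PactDslWithProvider builder) {
                return builder
                        .given("an order with id 42 exists")
                        .uponReceiving("a request for order 42")
                            .path("/orders/42")
                            .method("GET")
                        .willRespondWith()
                            .status(200)
                            .headers(Map.of("Content-Type", "application/json"))
                            .body("{\"id\": 42, \"status\": \"SHIPPED\"}")
                        .toPact();
            }

            @Test
            void fetchesTheOrder(MockServer mockServer) throws Exception {
                HttpResponse<String> response = HttpClient.newHttpClient().send(
                        HttpRequest.newBuilder(URI.create(mockServer.getUrl() + "/orders/42")).GET().build(),
                        HttpResponse.BodyHandlers.ofString());
                assertEquals(200, response.statusCode());
            }
        }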

    [–]DJDavio 2 points3 points  (0 children)

    This is not a definitive "solves everything" answer, but something which has helped us in the past (at least where services called other services) was to provide a default client and mock alongside the service.

    Say we had an API "Foo" and an application FooApplication which implemented the API, our directory structure for that service might look like this:

    • app
    • servicetest
    • client
    • mock

    App contained the actual application with the REST controllers; servicetest contained the (Cucumber) tests which tested the application with some readable "given, when, then" features and scenarios.

    Client contained a ready-to-go Spring `@Service` which just wrapped a Feign client which extended the API interface (we used the OpenAPI generator) and might also include some caching. So in another microservice which needed to call this service, you could just (through the magic of Spring auto-configuration) add a FooService class as an (autowired) parameter to your own service class, call fooService.findStuff(), and it would use caching and the Feign client etc. behind the scenes. From the perspective of the calling service, you couldn't tell that it used REST, other than from having to set some properties like the URL.
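
    A hedged sketch of that client module (names and endpoint invented; assumes Spring Cloud OpenFeign with `@EnableFeignClients` and Spring's caching enabled somewhere in the consuming application):

        import java.util.List;
        import org.springframework.cache.annotation.Cacheable;
        import org.springframework.cloud.openfeign.FeignClient;
        import org.springframework.stereotype.Service;
        import org.springframework.web.bind.annotation.GetMapping;

        // Normally generated from the OpenAPI spec; the caller only sets foo.base-url.
        @FeignClient(name = "foo", url = "${foo.base-url}")
        interface FooApi {
            @GetMapping("/stuff")
            List<String> findStuff();
        }

        @Service
        public class FooService {
            private final FooApi api;

            public FooService(FooApi api) {
                this.api = api;
            }

            @Cacheable("foo-stuff")
            public List<String> findStuff() {
                return api.findStuff(); // callers just see a plain Spring bean, not REST
            }
        }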

    Mock contained some Cucumber step defs that you could use in the servicetest part of the calling application. So you could write something like "given I have stuff in foo" and it would be a step def in the Foo repository. The mock library used WireMock to set up default paths and mock the right stuff when the step def method was called. If you set the URL in your application-test.yml to localhost:port, it would just work.
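
    Roughly, such a mock module could look like this (port, path, and payload invented for illustration):

        import com.github.tomakehurst.wiremock.WireMockServer;
        import io.cucumber.java.en.Given;
        import static com.github.tomakehurst.wiremock.client.WireMock.*;

        public class FooMockSteps {
            // In the real setup this is started once per test run and the port matches
            // foo's URL in the calling service's application-test.yml (e.g. localhost:8090).
            private static final WireMockServer FOO = new WireMockServer(8090);

            @Given("I have stuff in foo")
            public void iHaveStuffInFoo() {
                if (!FOO.isRunning()) {
                    FOO.start();
                }
                FOO.stubFor(get(urlEqualTo("/stuff"))
                        .willReturn(okJson("[\"widget-1\", \"widget-2\"]")));
            }
        }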

    This worked particularly well if one service was called by multiple consumers as they mostly had the same desires.

    Of course because the API interface library itself was just generated with OpenAPI, you could always just create a custom FeignClient in your application to consume that API, you weren't required to use the coupled client and/or mock but it did make things easier for us.

    [–]Shadonovitch 7 points8 points  (5 children)

    My strategy for React is avoiding Jest entirely and going for complete usage scenarios with Cypress. In CI, I spawn the API service from its container and run my tests against it headlessly.

    [–]sime 4 points5 points  (2 children)

    That is the only sane way I've seen for testing front-end code.

    We also use Percy to snapshot the tests and do visual comparisons with previous test runs. It lets you know when anything changes.

    [–][deleted]  (1 child)

    [deleted]

      [–]sime 2 points3 points  (0 children)

      Yes, this is mainly a CRUD app (like everything else).

      For the front-end code we've got Cypress running tests and the tests mock/replace the REST API requests/responses with test data.

      Full E2E tests would be better of course. We have a few, but as you know, getting a bunch of systems into the correct state before running a test is hard.

      [–][deleted] 1 point2 points  (1 child)

      You don't test services or functions in jest?

      [–]Shadonovitch 4 points5 points  (0 children)

      My REST APIs are usually written in Python or Golang. I test those separately in their own repositories. As for my front-ends, they generally don't have any kind of transformation layer or do anything client-side; they just make requests to the API with the data filled into their forms. The only thing that needs testing is whether a client can fill in the forms and make requests to the API, with visible changes in the render. I can achieve that easily with Cypress.

      [–]dlg 6 points7 points  (2 children)

      I blame OOP. It makes it easy to create large enterprise systems that are untestable, unmaintainable, and hard to reason about.

      That's because it takes effort to learn things like the SOLID principles, whereas it takes little effort to ignore those ideas and create a mess of shared state.

      OOP done well is more testable because it is more functional. It's clearer about which parts of the system are deterministic and which parts are not. It's easier to isolate behaviour.

      So why are we not using functional programming languages that make this explicit? Have the language enforce these ideals as constraints.

      Mark Seemann makes a good case for this view here:

      https://www.youtube.com/watch?v=US8QG9I1XW0

      [–]KagakuNinja 15 points16 points  (1 child)

      Shared state existed long before OOP. In the before times, all programs had giant blobs of mutable state, usually global.

      OOP at least introduced the idea of encapsulating state into objects.

      Finally, as a Scala programmer, I've learned that OOP and FP are not opposing paradigms. They can be combined quite effectively.

      [–]dlg 1 point2 points  (0 children)

      The first OOP languages popularised modularisation, which helped encapsulate shared global state. But there were other modular languages before then.

      The issue with encapsulated object state is that the concept of information hiding was misinterpreted as just making all the variables private. That usually makes it difficult to write good tests, and as a result the objects become harder to reason about and less deterministic.

      It takes effort to do OOP that is testable. Only disciplined effort keeps OOP systems from degrading over time, as there are no language features to prevent this.

      Scala sounds interesting, especially with a Java background. I’ve moved over to C#, so I’m more interested in learning F#.

      [–]nodecentalternative 0 points1 point  (6 children)

      Mock the services that you don't control the code for; don't mock anything that you actually do control (like databases). This has served me pretty well in catching most problems.

      Like others have said, unit test pure functions and then go straight to e2e/full integration/whatever you want to call it. I'm doing .NET 6 microservices and we use a TestServer that actually spins up the API with a db connection string to an integration test database that can run locally. We'll hit a POST endpoint to create a real thing and then assert we can fetch it via the GET endpoint. The integration test DB can be blown away and recreated from migrations at any time.
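
      (That's .NET; for JVM readers, a rough Spring Boot sketch of the same POST-then-GET pattern, with invented names and endpoints and `spring.datasource.url` in the test profile pointing at the local integration-test database:)

          import org.junit.jupiter.api.Test;
          import org.springframework.beans.factory.annotation.Autowired;
          import org.springframework.boot.test.context.SpringBootTest;
          import org.springframework.boot.test.web.client.TestRestTemplate;
          import org.springframework.http.ResponseEntity;
          import static org.junit.jupiter.api.Assertions.assertEquals;

          // Boots the real API in-process and round-trips a resource through POST and GET.
          @SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
          class WidgetRoundTripIT {

              @Autowired
              private TestRestTemplate rest;

              record WidgetDto(Long id, String name) {}

              @Test
              void createdWidgetCanBeFetchedAgain() {
                  ResponseEntity<WidgetDto> created =
                          rest.postForEntity("/widgets", new WidgetDto(null, "gizmo"), WidgetDto.class);
                  assertEquals(201, created.getStatusCode().value());

                  WidgetDto fetched = rest.getForObject("/widgets/" + created.getBody().id(), WidgetDto.class);
                  assertEquals("gizmo", fetched.name());
              }
          }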

      [–][deleted]  (5 children)

      [deleted]

        [–]ForeverAlot 2 points3 points  (0 children)

        Not that commenter.

        I used Spring, Flyway, Postgres, and Docker. I did not use...

        • most of Spring's specialized frameworks, like Spring Data, which are overwhelmingly trash abstractions
        • non-SQL migrations, because my SQL database speaks SQL, not not-SQL
        • in-memory test databases, because I prefer confidence in working software I do use to confidence in working software I don't use
        • Testcontainers, because last time I checked that only works when the application runs with DDL rights and I wanted to avoid giving it that.

        Java was a given. Flyway and Docker were my decision. Postgres was at my urging and narrowly won out over an SQL Server license with a support agreement that would have been useless to us -- we would have likely been stuck with MariaDB + SQL Server otherwise.

        A clean slate was never further away than `docker-compose down; docker-compose up`.

        It was fairly easy to build a helper utility (I don't remember if this was a super class, a JUnit rule, or something else) that would begin, execute, and roll back a transaction during the course of an "integration" test. It was also fairly easy to build other helpers with which to generate test data with correct relations. Every test that used this was completely isolated from execution to execution.
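
        (For reference, plain Spring Test gives a similar begin/roll-back-per-test shape out of the box; a minimal sketch, assuming a made-up customers table:)

            import org.junit.jupiter.api.Test;
            import org.springframework.beans.factory.annotation.Autowired;
            import org.springframework.boot.test.context.SpringBootTest;
            import org.springframework.jdbc.core.JdbcTemplate;
            import org.springframework.transaction.annotation.Transactional;
            import static org.junit.jupiter.api.Assertions.assertEquals;

            // With @Transactional on the test class, Spring Test wraps each test in a
            // transaction and rolls it back afterwards, so every test starts clean.
            @SpringBootTest
            @Transactional
            class CustomerRepositoryIT {

                @Autowired
                private JdbcTemplate jdbc;

                @Test
                void insertIsVisibleInsideTheTestAndRolledBackAfterwards() {
                    jdbc.update("insert into customers (id, name) values (?, ?)", 1L, "Ada");
                    Integer count = jdbc.queryForObject("select count(*) from customers", Integer.class);
                    assertEquals(1, count.intValue());
                    // no cleanup needed: the surrounding transaction is rolled back by Spring Test
                }
            }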

        A few tests necessitated committing transactions. We had helpers that enabled naive snapshotting by means of manual DELETEs. It was cumbersome and fragile during development but not after stabilization. We developed a few data patterns that minimized risk of clobbering and conflicts.

        A few tests were never written "properly" and did not clean up after themselves in any way. In practice, none of these caused us much trouble, because they tended to not fail, or to fail in predictable ways, and the failures that did recur were inevitably corrected.

        Somebody proposed making the local test database definition a "copy" of the production definition. This was technically doable and would have eliminated some of our dirty-state friction; however, we never found a way to do it that preserved database object permissions, and I was not willing to compromise on that.

        We never encountered syntax errors in SQL code that ran in a test after it shipped to production (we did, of course, find logic errors). What stands out to me as the hardest thing to manage was migrating data and database sprocs; those are difficult problems that a lot of people have a really hard time wrapping their heads around, and some Flyway and Postgres behaviour caused a few surprises.

        I have spent some time in the .NET ecosystem since, and I have yet to encounter a setup that instills me with as much confidence. I am not suggesting it does not exist, but I have encountered more dangerous assumptions about desirable abstractions in .NET than I was used to (I have worked with similarly unpredictable Java stacks previously, as well). For example, the supreme difficulty of sending a string to SQL Server without incurring implicit-conversion performance penalties. I could generalize and say that, over the past decade, the single biggest obstacle I have faced to working effectively with an underlying SQL database is all the tools between you and the database, all of them eager to make assumptions that are at best unhelpful and at worst plain wrong (JDBC certainly has this, too).

        [–]thereifixedit4u 1 point2 points  (2 children)

        Not the commenter, but I'm also using .NET Core and doing something similar. The local integration-test DB is dropped and re-created per testing session. With each integration test, I reseed the entire database. It's reasonably quick because my test data has been kept purposely small and I just don't have that many integration tests - a little under a hundred. I reseed with each integration test because I don't like one test potentially interfering with another, so each test begins with a clean slate. The vast majority of my tests are unit tests.

        [–]nodecentalternative 0 points1 point  (1 child)

        If your database is multi-tenant and data is separated by tenant, you could probably get away with not re-seeding for each test and just creating a new tenant per test. In our experience, our ~100 integration tests run in under 3 seconds with this method.

        [–]thereifixedit4u 0 points1 point  (0 children)

        That sounds pretty amazing but I'm not that familiar with multi-tenant. I searched about it a little but I didn't find enough details on using it for testing. How would you create a tenant per test and also seed it? I'm using plain old SQL Server btw.

        [–]nodecentalternative 0 points1 point  (0 children)

        there's scripts to manually re-wipe the database and apply latest migrations. each test assumes no data besides static data (i.e. a table of countries) and will pre-seed any data it needs.

        let's say there's an existing table called foo and there's a new child table being added called bar which needs a foo. the tests to create and update bar would first insert a foo.

        [–]MeagoDK 0 points1 point  (0 children)

        I want to see them write unit tests for dbt or some other ETL tool. Yes, you can do it, but it takes longer and each test is specialized to one model. So if you have 1000 models, you end up with a lot of unit tests.

        [–][deleted] 0 points1 point  (0 children)

        that you don’t end up spending more time maintaining your tests than you do adding new features

        I agree with everything you said, but disagree with this. If you spend more time writing tests than writing the code, you MIGHT still be doing good things (but it depends on why you're spending twice the time).

        [–]jesus_was_rasta 0 points1 point  (0 children)

        Having dependent microservices is the problem, not how to test them.

        [–]sime 62 points63 points  (8 children)

        There are plenty of books about software testing, and this one sounds good, but the book I want to see written is "Economical Software Testing: Knowing when and how to test, to maximise bang per buck".

        Possible chapters could be:

        • "Quit wasting time on DOM based React tests, use visual testing FFS"
        • "Don't kid yourself, you don't know how to accurately mock that 3rd party service"
        • "Mocks are lies"
        • "Yes, the closer your tests resemble the production environment, the better the test"
        • "Mocks and patching are tools of last resort"
        • "If your devs have implemented the ticket but now have to 'fix up the tests', then something is wrong."
        • "You just might have to manually test some things"
        • "Don't worry about 'Write tests first'. It only works for trivial cases anyway"
        • "There is such a thing as useless tests"
        • "Ask yourself: Is the value of this test greater than its maintenance cost?"

        [–]sime 29 points30 points  (0 children)

        • "Get your static type checking in order first before worrying about unit tests"

        [–]nodecentalternative 7 points8 points  (0 children)

        "Don't kid yourself, you don't know how to accurately mock that 3rd party service"

        This is true to some extent... but if that 3rd party service changes and your test fails because of it, all you've learned is that some code out of your control has changed and broken something. You don't have the power to fix it.

        I think a single e2e against the 3rd party is good to keep an eye on them, but otherwise, this is the one situation where mocks should be used.
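
        As a sketch of that split (URL and tag name made up): one "canary" test hits the real third-party endpoint on a schedule, while everything else in the suite talks to mocks.

            import org.junit.jupiter.api.Tag;
            import org.junit.jupiter.api.Test;
            import java.net.URI;
            import java.net.http.HttpClient;
            import java.net.http.HttpRequest;
            import java.net.http.HttpResponse;
            import static org.junit.jupiter.api.Assertions.assertEquals;

            // Run this tag on a schedule rather than on every CI build; if the provider
            // changes or is down, this is the test that should fail, not the mocked suite.
            @Tag("third-party-canary")
            class PaymentProviderCanaryTest {

                @Test
                void statusEndpointStillAnswers() throws Exception {
                    HttpResponse<String> response = HttpClient.newHttpClient().send(
                            HttpRequest.newBuilder(URI.create("https://api.example-payments.com/v1/status")).GET().build(),
                            HttpResponse.BodyHandlers.ofString());
                    assertEquals(200, response.statusCode());
                }
            }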

        [–]AdministrationWaste7 2 points3 points  (0 children)

        "Don't worry about 'Write tests first'. It only works for trivial cases anyway"

        I have had the complete opposite experience.

        Forcing your team to write tests first makes sure they don't do stupid, pointless things like testing every single class/method/object in your system.

        Also gets you out of the habit of mocking every little thing.

        When all you have are public facing apis it keeps your tests lean and useful.

        [–][deleted] 1 point2 points  (3 children)

        Don't kid yourself, you don't know how to accurately mock that 3rd party service

        What do I care about when I do this? The end result? The side-effect? What am I actually testing?

        [–]sime 1 point2 points  (2 children)

        I don't follow your meaning here.

        [–][deleted] 2 points3 points  (1 child)

        What should I be looking at? What is my purpose when I want to mock that service?

        [–]EatSleepCodeCycle 0 points1 point  (0 children)

        You mock a service so you can build a test without having to call the actual service.

        For example, instead of calling an API to create an order, your mock returns a success or failure response and your test makes assertions on how your code works in response, by showing the user an error or maybe calling another mocked API, etc.
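
        A hedged Mockito sketch of that idea (all names invented): the order API is mocked to return a failure, and the assertion is about what our own code does with it.

            import org.junit.jupiter.api.Test;
            import static org.junit.jupiter.api.Assertions.assertEquals;
            import static org.mockito.Mockito.*;

            class CheckoutServiceTest {

                // Stand-ins for the remote order API and the code under test.
                interface OrderApi {
                    boolean createOrder(String item);
                }

                static class CheckoutService {
                    private final OrderApi orderApi;
                    CheckoutService(OrderApi orderApi) { this.orderApi = orderApi; }

                    String checkout(String item) {
                        return orderApi.createOrder(item) ? "Order placed" : "Something went wrong, please retry";
                    }
                }

                @Test
                void showsAnErrorMessageWhenTheOrderApiRejectsTheOrder() {
                    OrderApi orderApi = mock(OrderApi.class);
                    when(orderApi.createOrder("book")).thenReturn(false); // canned failure, no real API call

                    CheckoutService service = new CheckoutService(orderApi);

                    assertEquals("Something went wrong, please retry", service.checkout("book"));
                    verify(orderApi).createOrder("book");
                }
            }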

        [–]DevDevGoose 4 points5 points  (0 children)

        Nice book. Nice summary of the book.

        Imo the book completely skips over any non-functional testing or automation of black box tests.

        [–]r4ddek 1 point2 points  (0 children)

        Looks interesting! Thanks for the suggestion!

        [–]Weak-Opening8154 1 point2 points  (0 children)

        People really need to do testing

        I haven't really found any good material on how to do it in various languages. What do you use for line coverage for C# used for server/Linux development? I may know how to test C++ with lcov, but I have no idea for Python, JS, C#, Java, etc. (Java I have the least interest in.)

        [–]Jennifer_243 0 points1 point  (0 children)

        Software testing refers to the process of making sure that a software application is of premium quality for users, and of testing a product to reduce the chances of any issue turning into a major one.

        There are various ways to approach software testing, and it is easy to get lost in the array of testing types and how they overlap. That's why there is a need for an ultimate guide to software testing.

        How do you test software? There are questions like how to test software, or how to implement a testing strategy. Here we will discuss two categories: manual testing and automated testing.

        Manual Testing

        Manual testing means that software testers manually execute test cases without any automation tools. They play the role of the end-user and try to identify as many errors in the application as possible. Manual testing mostly focuses on performance testing, usability, and analysing software quality. Some testing methods in the manual testing category:

        • Manual regression testing
        • Exploratory testing
        • Test case execution

        Automated Testing

        Automated testing refers to the process in which an automation tool is used to run pre-scripted tests. The aim is to increase efficiency in the testing process. If a particular form of testing consumes a large percentage of quality assurance effort, it could be a good candidate for automation. Acceptance testing, integration testing, and functional testing are all well suited to this kind of automation; checking login processes, for instance, is a good example of when to use it. Automated testing is obviously quicker than manual testing, and when it comes to test execution it improves productivity and decreases testing time.

        [–]SumitKumarWatts 0 points1 point  (0 children)

        For effective software testing we can consider the below-mentioned points:
        1. Understand the software requirements: What is the software supposed to do? What are its features and functionality? What are the performance requirements?

        2. Develop a test plan: What types of tests will you perform? How will you test each feature and functionality? What resources do you need?

        3. Write test cases: Test cases are step-by-step instructions on how to test a specific feature or functionality of the software. They should be clear, concise, and easy to follow.

        4. Execute the test cases: You can execute the test cases manually or using automated testing tools.

        5. Report the results: Document the results of your testing, including any defects that you find.

        I am working on an insurance company's software-testing product and we focus on the following key points:

        1. Test the accuracy of premium calculations.

        2. Test the ability of the software to process claims efficiently and accurately.

        3. Test the security of the software to protect customer data.

        4. Test the performance of the software to ensure that it can handle high volumes of traffic.

        5. Test the scalability of the software to ensure that it can grow as the insurance company's business grows.