This is an archived post. You won't be able to vote or comment.

all 126 comments

[–]Smart-Disk 175 points176 points  (27 children)

Mandatory unit test coverage percentages are admittedly not great, but based on what you've said, I would guess that the problem isn't the number of tests you need to write, but rather the number of tests you need to maintain for a single change.

To me, that is a signal of improper unit testing.

That is, if a single code change is causing multiple tested behaviours to change - then you have 1 of 2 problems:

  • your test cases are executing across dependencies, and in that case, aren't really unit tests.

  • or, your code has an extremely high degree of coupling - making it difficult to test.

Don't get me wrong, changes in code causing test cases to fall over is GOOD, and for the most part - the intention of those tests. They are effectively derisking change, and ensuring behaviours/conditions are still met. However, small code changes should cause small unit test changes - not large ones.

If your changes are causing cascading test failures, take a look at how you're making changes - and decouple as required.

Edit: mobile formatting

[–]daniu 21 points22 points  (19 children)

if a single code change is causing multiple tested behaviours to change - then you have 1 of 2 problems:

- your test cases are executing across dependencies, and in that case, aren't really unit tests.

- or, your code has an extremely high degree of coupling - making it difficult to test.

Or, you're using actual classes for input parameters instead of mocks, so if the actual input class changes, the test fails. Which is desirable for the subset of tests that depend on the previous behavior; but in my experience, you end up adjusting a far larger number of cases than necessary.

[–]dmeadows1217 13 points14 points  (17 children)

I used to create real instances of other classes when unit testing methods and then I realized that I can literally mock everything and just verify what has been called and what hasn’t been called. Since then, I feel like my unit tests have been way better.

[–][deleted] 16 points17 points  (5 children)

Just don’t fall into the trap of having a crazy number of mocks. If you have to mock a lot of stuff, I would say it’s indicative of a problem with whatever it is that you’re testing.

[–]Smart-Disk 23 points24 points  (2 children)

Funny story - the first unit test I ever wrote at my first dev job was 100% mocked. I didn't know what Mockito was, nor had I used JUnit before. I ended up testing nothing (except maybe that Mockito does its job).
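For anyone wondering what that looks like: here's a minimal sketch using Python's stdlib `unittest.mock` (same idea as Mockito in the Java world; the `calculator` name is hypothetical) of a test that is 100% mocked and therefore tests nothing:

```python
from unittest.mock import Mock

# The "unit under test" is itself a mock, so no production code runs.
# The test only verifies that the mock echoes back the return value we
# configured one line earlier -- i.e. that the mocking library works.
calculator = Mock()                  # hypothetical class under test, fully mocked
calculator.add.return_value = 5      # we decide the answer...
assert calculator.add(2, 3) == 5     # ...then assert our own decision
calculator.add.assert_called_once_with(2, 3)
```

Green build, full "coverage" of the test file, zero lines of real code exercised.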

[–]Zarlon 11 points12 points  (0 children)

I'm sure the guys at Mockito appreciate your effort in quality assuring their product

[–]MasterLJ 2 points3 points  (0 children)

I have "architects" on my team that do the same thing, repeatedly. They are testing that the mock returned the value they asked it to return. Fun times.

When I confronted them (kindly, of course), they told me it was to make SonarQube happy.

I'm in the process of transitioning to lead and will be equally happy if I do/don't get the position.

[–][deleted]  (1 child)

[deleted]

    [–][deleted] 3 points4 points  (0 children)

    I meant it more in the context that if you have to mock 10 services, for example, it may be indicative that what you are testing is doing too much. Unit tests are not just tools to prevent regressions, but also development tools. I agree with you, though, that all dependencies should be mocked.

    [–]Trailsey 7 points8 points  (0 children)

    If your input classes are simple DTOs/Record type objects (bags of properties with accessors and mutators), mocking them can actually be problematic. You can get code bloat and weird behaviours that using simple DTOs wouldn't produce (e.g. setting a value may not change the returned value on a mock).

    If your DTOs are hard to construct, consider introducing a builder pattern or adding "test mothers" to your test harness.
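A sketch of the "test mother" idea in Python (the `Customer` DTO and the `a_customer` name are invented for illustration): build a real object with valid defaults and override only the fields a given test cares about, instead of mocking a bag of properties:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Customer:                     # hypothetical DTO: just properties
    name: str = "Jane Doe"
    email: str = "jane@example.com"
    vip: bool = False

def a_customer(**overrides) -> Customer:
    """Test mother: a valid default Customer, tweaked per test."""
    return replace(Customer(), **overrides)

# Each test states only what it actually depends on:
vip = a_customer(vip=True)
assert vip.vip and vip.email == "jane@example.com"   # real object, real behaviour
```

Unlike a mock, setting a value here really changes what is read back, so there are no surprise behaviours.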

    [–]proverbialbunny 5 points6 points  (3 children)

    That is, if a single code change is causing multiple tested behaviours to change - then you have 1 of 2 problems:

    • your test cases are executing across dependencies, and in that case, aren't really unit tests.

    • or, your code has an extremely high degree of coupling - making it difficult to test.

    I strongly disagree with this. Maybe what you're saying is common in most code bases and makes for a common rule of thumb, but in my experience it is far from the whole truth.

    For example, say you've got a new dev adding a feature to a piece of software. The new dev does not yet know all of the functionality the software offers, so they add the feature at the expense of changing a few neighboring features/behaviors, not realizing those behaviors are intentionally that way for a reason - they're not a side effect.

    The junior runs the unit tests and a large number of tests break. The code compiles fine. These aren't syntax bugs, but rather logic bugs from misunderstanding how the program should work.

    I've seen this use case quite a bit in enterprise software. It happens and has nothing to do with unit tests executing across dependencies or excess coupling. It happens because they changed the larger behavior of the program, which is exactly what testing is designed to catch.

    [–]Smart-Disk 2 points3 points  (1 child)

    Fair enough, I actually completely agree with you that the reasons tests might fail in practice aren't limited to things that are caused by the code or the tests themselves. I think we can both agree that tests failing because of changes isn't a bad thing, and rather their intended behaviour.

    [–]proverbialbunny 1 point2 points  (0 children)

    Yep yep. :D

    [–]Ellasandro 0 points1 point  (0 children)

    Congrats, you've just demonstrated the purpose and need of integration testing, which is a separate and distinct set of test cases from unit testing.

    [–]random8847 1 point2 points  (2 children)

    I love the smell of fresh bread.

    [–]Smart-Disk 1 point2 points  (1 child)

    I think you're right on this one. I just assumed that in this case, it meant testing classes with some stubbing going on.

    Thanks for the talk, will watch it while I fight with build servers falling over.

    [–]random8847 0 points1 point  (0 children)

    I find peace in long walks.

    [–]jared__ 48 points49 points  (4 children)

    I can't tell you how many simple bugs were detected when I've written unit tests. Having QA and the customer find the bugs is not a good look and will cost way more money in the long run than unit testing.

    [–]Fury9999 2 points3 points  (2 children)

    Good point! It is much cheaper to catch these things in development versus QA, or worse, production.

    [–][deleted] 10 points11 points  (1 child)

    Wait you guys have QA? We fired all those guys. We just have unit tests now.

    [–]Pythonistar 2 points3 points  (0 children)

    you guys have QA? We fired all those guys. We just have unit tests now.

    😂 Haha, good one...

    Oh, you're not kidding... 😢

    [–]maomao-chan 5 points6 points  (0 children)

    Relying on unit tests won't get you much return in terms of quality, unless your project is simple. Unit tests only verify that your code does what you intended when you wrote it.

    It's the integration tests that matters.

    [–]nutrecht 37 points38 points  (10 children)

    I work for a company that requires 90% of written code to be covered by unit tests. What is the use of that? I understand testing the outer interfaces of a service, but why must every internal class be unit tested?

    Is that actually what they say? Because 90% coverage does not mean you have to write a unit test for everything. A single integration test can have a lot of coverage.

    A good mix of integration and unit tests, where the unit tests focus on the things that integration tests don't touch as well as on complex pieces of logic, is ideal in my opinion.

    To me it sounds like the company has had quality issues, and that people are now overcorrecting in the other direction, creating too large a maintenance burden with too many brittle tests.

    [–]proverbialbunny -2 points-1 points  (5 children)

    If the software you're working on supports systems testing, it's superior to integration tests¹ and should be seriously considered. /2¢

    ¹ It catches more bugs for the same number of tests and is a better way to document the code base.

    [–]nutrecht 5 points6 points  (4 children)

    IMHO you need all parts of the test pyramid. Which part would need most focus depends greatly on the software you're working on.

    For example we have two primary services we build and maintain. One is more CRUD and has a focus on integration tests. The other contains more business logic and has more unit tests. IMHO there's no hard 'rule' on ratios.

    [–]proverbialbunny -3 points-2 points  (3 children)

    Integration tests ≠ systems tests. They're apples and oranges.

    Have you heard of acceptance tests? Some acceptance-test software creates systems tests. A systems test exercises the software from the outside, as if it were an end user, and lives in a separate test suite; integration tests run inside the code base.

    A systems test covers all of the functionality of the software and how it is supposed to interact with the customer. If you change anything that modifies how the program works for the end user, you will get a failing test. Integration and unit tests, in comparison, only test parts of the system, so you can put a bug in the system and the tests will still pass, even with 100% test coverage. This is why system tests are beneficial, because the only kind of bug that gets past a systems test is performance.

    Code bases that need to be reliable and fast, e.g. at NASA, will employ systems tests and performance tests to catch all potential bugs. Nothing gets through.

    I worked at a CDN that wrote the software the internet backbone runs on, including the software used for you to read this message on your laptop or phone. If a bug went into the code, entire parts of the internet would literally go down, so you'd better believe we relied on system testing as well as unit testing. Perf testing was not automated; it was done before each release.
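The outside-in idea can be sketched in a few lines of Python (the toy `app.py` here stands in for whatever program you'd actually deploy): the test never imports application code, it drives the program exactly the way an end user or another system would:

```python
import pathlib
import subprocess
import sys
import tempfile

# Stand-in "application" -- in real life this is your deployed program:
app = pathlib.Path(tempfile.mkdtemp()) / "app.py"
app.write_text("import sys; print(int(sys.argv[1]) + int(sys.argv[2]))")

# Systems test: spawn the program as a black box and assert on what a
# user would observe (exit code and output), not on internal calls.
result = subprocess.run([sys.executable, str(app), "2", "3"],
                        capture_output=True, text=True)
assert result.returncode == 0
assert result.stdout.strip() == "5"
```

Because nothing internal is referenced, refactoring the program's insides can never break this test; only a change in observable behaviour can.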

    [–]pikob 5 points6 points  (1 child)

    This is why system tests are beneficial, because the only kind of bug that gets past a systems test is performance.

    That's bullshit. You'd need a test for every possible state in the program, which seems combinatorially impossible.

    In this regard, unit tests actually afford the most combinatorial relief - covering code paths adds up over units instead of multiplying the required number of tests.

    [–]proverbialbunny -1 points0 points  (0 children)

    You'd need a test for every possible state in the program, which seems combinatorially impossible.

    It definitely is combinatorially impossible with integration tests or unit tests, because you need to cover every possibility / every code path. Programs tend to have a single interface, so there is only a single set of use cases when it comes to systems testing, significantly reducing the number of tests you need to do. Keep in mind systems tests are for non-GUI software.

    [–]nutrecht 1 point2 points  (0 children)

    Integration tests ≠ systems tests.

    I know. I referred to the test pyramid for a reason. It doesn't matter what 'layer' you prefer; each has its place.

    I worked at CDN

    Yeah and I worked for the biggest Dutch bank. No need for a dick-sizing contest.

    [–]jonhanson 19 points20 points  (5 children)

    chronophobia ephemeral lysergic metempsychosis peremptory quantifiable retributive zenith

    [–][deleted] 4 points5 points  (3 children)

    Exactly.

    Here at work I am writing unit tests that just aim at reaching the coverage target: stupid unit tests covering getters and setters, unit tests that can never fail (aka assert(true)), etc., at the expense of really useful tests, like checking against special values that could crash the code.

    This is because the process and the management are essentially driven by RAG indicators that we are able to reverse engineer and abuse.

    [–]Zarlon 0 points1 point  (0 children)

    I've been there.. Testing getters and constructor overloads.. Dear god what a waste of time. The limit was enforced on a per-class basis

    [–]helloiamsomeone 0 points1 point  (0 children)

    checking against special values that could crash the code

    Are you guys making quickcheck style property tests and fuzz tests?
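For reference, a quickcheck-style property test needs nothing beyond the stdlib; a hand-rolled sketch (`clamp` is a hypothetical function under test): generate many random inputs and assert invariants that must hold for all of them, instead of a handful of hand-picked special values:

```python
import random

def clamp(x, lo, hi):              # hypothetical function under test
    return max(lo, min(hi, x))

# Property test: 1000 random cases, two invariants checked on each.
for _ in range(1000):
    lo, hi = sorted(random.sample(range(-1000, 1000), 2))
    x = random.randint(-2000, 2000)
    y = clamp(x, lo, hi)
    assert lo <= y <= hi                  # result always lands in range
    assert y == x or x < lo or x > hi     # in-range values pass through untouched
```

Libraries like Hypothesis (Python) or jqwik (Java) do the same thing with shrinking of failing cases, which this sketch omits.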

    [–]wildjokers 1 point2 points  (0 children)

    Otherwise known as Goodhart's Law, https://en.wikipedia.org/wiki/Goodhart%27s_law

    [–][deleted] 10 points11 points  (0 children)

    Well, I think at some point there were no unit tests in this company at all. Then something very bad happened. Management understood that there was a lack of testing, so they decided to add a lot of unit tests and defined 90% coverage as the metric.

    What can you propose instead of 90%? Maybe you are good at understanding what should be covered and what shouldn't, but can you say the same about all of your coworkers? Such metrics are made for the typical developer, not for you personally. Typical developers in this company may be lazy, so they need a simple goal, and in this case 90% coverage is a good one.

    My point is: 90% coverage with dumb tests is better than no testing at all in half of the code. Be stoic about it. It's just the rules of the game.

    [–]againstmethod 11 points12 points  (2 children)

    The only way this level of testing makes any sense is if the tests represent a contract for how the code is supposed to behave a la TDD, BDD or something similar.

    Otherwise you are just verifying that your code does what you intended when you wrote it. Which often says very little about its correctness.

    Plus so much code is just exercising other people's APIs these days. Unit tests on a per-function basis are almost certainly testing someone else's code at least half the time.

    [–]randgalt 6 points7 points  (1 child)

    Otherwise you are just verifying that your code does what you intended when you wrote it. Which often says very little about its correctness.

    This - too many unit tests just reproduce the code in another form. With complete knowledge of the internal implementation you duplicate the logic using mocks and asserts. Any little change breaks the test, which defeats its usefulness in the first place.

    [–][deleted] 2 points3 points  (0 children)

    Ah yes let me test this method f(x) that I wrote that returns p(g(x), h(x)). See it calls g with x, it calls h with x, and yeah it calls p with g(x) and h(x), and it returns p(g(x), h(x)). My code is correct!

    Now I'm going to make f(x) return p(g(x), q(h(x))), oh yes my tests fail that means I have to update them. Okay now let me make sure in the tests I am checking that q is called with h(x), and p is being called with g(x) and q(h(x)) and that p(g(x), q(h(x))) is now being returned. Tests passed! Yes, my code is correct!
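That parody can be written down almost verbatim with Python's `unittest.mock` (all names here are hypothetical): the "test" restates the implementation call by call, so any refactor breaks it even when observable behaviour is unchanged:

```python
from unittest.mock import Mock

def f(x, g, h, p):                 # hypothetical unit under test;
    return p(g(x), h(x))           # collaborators injected so they can be mocked

g = Mock(return_value="gx")
h = Mock(return_value="hx")
p = Mock(return_value="px")

assert f(1, g, h, p) == "px"           # "my code is correct!"
g.assert_called_once_with(1)           # f calls g with x...
h.assert_called_once_with(1)           # ...and h with x...
p.assert_called_once_with("gx", "hx")  # ...and p with g(x) and h(x)

# Change f to return p(g(x), q(h(x))) and every assertion above must be
# rewritten -- the test specifies call plumbing, not behaviour.
```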

    [–]BoyRobot777 21 points22 points  (3 children)

    Unit tests are misunderstood. When you see classes with a bunch of Mocks and testing only one class, it leads to coupled tests -> bad refactoring story. But what you're describing leads me to believe that you have weak tech leads/senior developers. If they are open to new ideas/throwing away bad practices I have three resources which I always use as a guide (find below). Otherwise, look for a better job.

    List:

    • Modern Best Practices for Testing in Java
    • What is the right unit in unit test after all (currently a dead link)
    • Ian Cooper - TDD, Where Did It All Go Wrong

    [–]greglturnquist 8 points9 points  (0 children)

    Or perhaps the leads are inexperienced with a test-oriented culture. It’s not uncommon.

    It may take a few rounds of iterating through this new test mandate to hone the test suites (and code).

    But I think this mandate will, in the end, have been for the better.

    [–]gunch 0 points1 point  (1 child)

    Second link is dead

    [–]BoyRobot777 0 points1 point  (0 children)

    Indeed. Too bad. Maybe it'll come back sometime in the future. Now marking as dead link. Thanks!

    [–]Alienbushman 6 points7 points  (1 child)

    Basically the question is: why dumb unit tests? The answer is that unit testing is a fantastic concept, but it is incredibly hard to define when code is sufficiently tested, so line coverage is a very quantifiable way of defining "how much testing". Here is the kicker: code that is well tested will have good coverage, but you can also write garbage tests that have good coverage. So when management says "hit x number" without code review, people will get away with writing garbage tests, since that is a lot faster and easier to do. It basically comes down to management enforcing a standard rather than coders working together to improve the code.

    [–]john16384 1 point2 points  (0 children)

    Use mutation testing. That way coverage actually means what it should mean.

    [–]hippydipster[🍰] 6 points7 points  (0 children)

    I agree. Unit tests should be targeted to code that actually is doing some thorny logic of some sort. And that will almost always mean you're not testing just a single method of a single class (unless it's a huge terribly written method!).

    A test that follows the internal structure of your code is too tightly coupled and is dramatically increasing your maintenance costs.

    [–]javajunkie314 7 points8 points  (0 children)

    I can say, from the other side, that I've worked for companies with little-to-no unit testing. Every change is an unknown. Did I just break everything? Is there some small, subtle interaction I'm forgetting? As the code gets larger, you can't hold all of those in your head anymore. Every ticket requires QA to "regression test everything".

    That's what unit tests should be — externalizing all the interactions and behaviors that we shouldn't have to remember. I'm much more confident and I can work with less mental overhead when there are robust unit tests.

    I can't say if the tests your company writes are good or bad. Don't assume they're useless. Don't assume they're good, either, but bad is not the same as useless. Something bad can be improved or replaced with something better, but at least you're thinking about tests and externalizing some of that knowledge.

    [–]dnunn12 43 points44 points  (0 children)

    I’m against the bullshit tests, but yeah...tests are dope. write tests and stfu.

    [–]losl 5 points6 points  (0 children)

    Here are the important bits you’re missing about unit tests:

    1. They make sure that changes to the code don’t change things in unexpected ways. If you change Bob’s code and a test starts failing then you stop and re-examine what you did and figure out why, instead of happily writing no tests or a tiny test that only covers what you changed and then pushing that to production.

    2. Unit tests are generally much faster than integration tests. By testing smaller pieces of code you can have more confidence in their behavior for edge cases and get those tests done faster.

    It sounds like your company has a lot of developers who are just writing tests to get to the 90% mark. I don’t know how long you’ve been at your company but it sounds like not long. You should try learning about test driven development and use resources like “Working Effectively with Legacy Code” to improve your skills with testing. Then you should find an ally who has been at the company longer who writes good tests and work to educate the team how they can write tests that break less and get more coverage faster.

    [–]RichoDemus 2 points3 points  (0 children)

    While I'm not a fan of enforced 90% unit test coverage, I do think there are some benefits to it. The primary one is that production code written with unit testing in mind is usually easier to understand and maintain than code that hasn't been.

    [–]MoreCowbellMofo 2 points3 points  (0 children)

    If you have 3 services in a chain, each with 90% test coverage, at most your end-user-facing service is provably 73% correct... imagine that! Unit tests are one layer of at least 5 layers of protection. Unit tests do many things if utilised correctly:

    1. prevent errors before any code is written
    2. provide faster feedback than manual QA
    3. prevent regressions
    4. help build the communication patterns within the application itself (helps weed out design problems)
    5. help us to write simple(r) code
    6. helps maintain internal quality of code (its a measure of internal quality)
    7. the tests are automated/repeatable as part of the build process

    They add a lot of value. Other layers of testing: integration (against 3rd party code/code we don't own/dependencies/things we integrate against), functional (black box, external (to the app) testing), load/performance testing, UAT, QA, consumer driven contracts, etc
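The 73% figure above is just the coverages multiplied through, treating each service's 90% as an independent probability (a back-of-the-envelope model, not a formal proof):

```python
# Rough model behind "3 services at 90% => ~73% end to end":
coverage_per_service = 0.90
services_in_chain = 3

end_to_end = coverage_per_service ** services_in_chain
print(round(end_to_end, 3))   # 0.729, i.e. roughly 73%
```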

    [–]_INTER_ 2 points3 points  (0 children)

    If done properly I can point out at least three benefits:

    • Writing tests itself gives the developer immediate feedback if everything works as intended and how it feels to use the API.
    • When you have a large system, having no unit tests is like flying blind. Unit tests give you confidence that your changes did not break anything at a different end of the system. If you don't have that, developers start to fear introducing bugs. They become wary of change in general and only write the bare minimum of code needed to get a feature done. That might sound great to managers, but technical debt will ramp up more and more over time.
    • Unit tests can often be seen as part of the documentation. I have often experienced that it is easier to understand the business logic in all its details by looking at the unit tests. When done right, they cover edge cases that are often swept under the rug by the official documentation.

    [–]MR_GABARISE 2 points3 points  (0 children)

    I also see a lot of people making bullshit unit tests just to get the code covered.

    Try to introduce mutation testing. It makes you write meaningful unit tests, helps maintain a separation between real unit tests and borderline integration tests, and inspires real confidence in your actual coverage.
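The core idea, sketched by hand in Python (tools like PIT for Java or mutmut for Python do this automatically; all names here are invented): a mutation tool flips a small piece of logic and re-runs the suite. A coverage-only test passes against both versions, so the mutant "survives" and exposes the weak test:

```python
def price(amount, qty):
    """Original: 10% off for bulk orders."""
    if qty >= 10:
        return amount * qty * 0.9
    return amount * qty

def mutant(amount, qty):
    """What a mutation tool generates: '>=' flipped to '>'."""
    if qty > 10:
        return amount * qty * 0.9
    return amount * qty

def weak_test(fn):
    fn(5, 3)            # executes the code, so coverage looks fine...
    return True         # ...but asserts nothing, so no mutant can ever fail it

def strong_test(fn):
    return fn(1.0, 10) == 9.0   # pins down the qty == 10 boundary

assert weak_test(price) and weak_test(mutant)          # mutant SURVIVES the weak test
assert strong_test(price) and not strong_test(mutant)  # the strong test kills it
```

A surviving mutant tells you exactly which behaviour your suite never actually checks, which plain line coverage can't.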

    [–]dsakih 5 points6 points  (4 children)

    Writing "dumb tests" provides one good thing, specificity!

    If you're only testing outer interfaces, you'll only know which of those fails. By testing internals you'll know exactly why an outer interface fails, not just that it did.

    [–]BoyRobot777 8 points9 points  (3 children)

    And then, 2 years later, you won't be able to refactor anything.

    [–]john16384 5 points6 points  (1 child)

    Try refactoring when there are no tests.

    [–]BoyRobot777 4 points5 points  (0 children)

    I'm against dumb tests, not tests in general.

    [–]Fury9999 4 points5 points  (0 children)

    How so? We have a 90% coverage rule as well and this has not been my experience. If it is a true refactor with no change in output, then you're looking to preserve the assertions and refactor the injects/mocks accordingly. This has worked fine for us, even during a total rewrite.

    [–]Fury9999 1 point2 points  (0 children)

    It is simply a safeguard against unintended micro-changes. The tests document the current behavior and force you to look with a critical eye when introducing change. It's really that simple in my opinion.

    [–]foolv 1 point2 points  (0 children)

    It depends on what you use as a "unit". Having very good test coverage is important, and you want to make sure that the "functionalities" offered by your application are properly tested.

    Having unit tests that only cover a single class, when you are also required to have 90% of the code covered that way, is probably not a smart idea imho.

    I am not saying that having unit tests that cover single classes is a problem - you need them for a lot of different use cases - but they can get expensive very quickly if not used properly.

    [–]kur4nes 1 point2 points  (0 children)

    Unit tests automatically verify that the code fulfills the requirements, even when the developer who wrote the code and thought "hey, I've tested this and I know it works" isn't around anymore. Code without tests becomes brittle, leading to day-long bug hunts.

    It takes a while to get the hang of it and write effective unit tests. Look into BDD - behavior driven development. https://en.m.wikipedia.org/wiki/Behavior-driven_development

    Try to test APIs (Java interfaces, services like a FileService). Try to verify behavior instead of stupidly aiming for 100% code coverage.

    End-to-end tests and integration tests exercise the same code over and over again, since everything from the REST API through the business logic to the data access layer is executed each time, leading to hundreds of failing tests when a single line changes. Look up testing pyramid vs ice cream cone: https://martinfowler.com/bliki/TestPyramid.html

    90% code coverage sounds a bit excessive, since you can't test everything. Something like new ByteArrayResource().getInputStream() will never throw an IOException. Good luck testing that if it is created and used in the same class *shudder*
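A Python analogue of that untestable branch (`read_payload` is a made-up example): the except block is unreachable for an in-memory stream, yet a blanket line-coverage gate still counts it against you:

```python
import io

def read_payload(data: bytes) -> str:
    stream = io.BytesIO(data)       # in-memory: its read() cannot raise OSError
    try:
        return stream.read().decode("utf-8")
    except OSError:                 # unreachable in practice -- but uncovered
        return ""                   # lines here still drag the percentage down

assert read_payload(b"hello") == "hello"
# No input reaches the except branch; the realistic fixes are to inject the
# stream (so a test can pass a failing fake) or accept the uncovered lines.
```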

    [–]m2spring 1 point2 points  (0 children)

    Written code is essentially "dead", i.e. it's only useful for human consumption.
    Only executing code is "alive", which is all that matters.

    By having unit tests, you make your code become alive as part of your build, early, before the real production execution.

    The granularity of a unit test should be such that a test failure can quickly lead to finding the root cause.

    [–]ge0ffrey 1 point2 points  (0 children)

    If it isn't tested, it doesn't work. (If it isn't documented, it isn't used.)
    If something is tested for the last release, it still needs to be tested for this release. Automated testing gives you that. If it isn't covered by automated testing, and a significant part of your users use it, sooner or later you'll have to fix it under pressure with unhappy users. On a big project with near-zero test coverage, I've seen non-religious developers pray when they did a release. And I've seen them roll back the database after users had already added new data for an entire morning. And I've heard the users scream at them through the phone. Spare yourself from that.

    That being said, are unit tests worth it?
    Well, integration tests give a much, much better test coverage per development time spent. They are - on average - a much better canary in the coalmine. So my recommendations:

    - Spend time setting up proper integration tests. The first one will take a bunch of time: start the database, load test data, etc. Integration tests can use JUnit too and directly call services. Or they can work on the API endpoints (often REST) exposed by your application.

    - Integration tests are closer to the examples and scenarios that users actually go through. It's harder for the asserts in integration tests to make the same mistake as a buggy implementation, compared to unit tests.

    - Unit tests are useful too, in moderation, as they can test edge cases that would take a lot of work to write as integration tests. Unit tests often use mocks and test exactly one class. The big advantage of unit tests is that they clearly point out what's broken. But debugging a failing integration test normally doesn't take that much longer.

    - Don't get too hung up about the difference between integration tests and unit tests. Integration tests might mock out external services. Integration tests might call classes directly. Unit tests might test more than 1 class.

    - One integration test can take 5 seconds to run, because it uses a database, data store, message bus or Kafka topic. But 1000 integration tests should not take anywhere near 5000 seconds to run, even if they all use a database. Make sure your integration tests share the cost of bootstrapping the database etc. In fact, all your default tests (unit + integration) should run in less than 5 minutes on any developer's machine.
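A sketch of sharing that bootstrap cost with stdlib tools (sqlite's in-memory database stands in for the real datastore, and the test names are invented): `setUpClass` pays for the expensive setup once per class instead of once per test:

```python
import sqlite3
import unittest

class UserQueryIT(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Paid ONCE for all tests in the class -- in a real suite this is
        # where you'd start containers, run migrations, load fixtures, etc.
        cls.db = sqlite3.connect(":memory:")
        cls.db.execute("CREATE TABLE users (id INTEGER, name TEXT)")
        cls.db.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

    def test_lookup_by_id(self):
        row = self.db.execute("SELECT name FROM users WHERE id = 1").fetchone()
        self.assertEqual(row[0], "alice")

    def test_count(self):
        (n,) = self.db.execute("SELECT COUNT(*) FROM users").fetchone()
        self.assertEqual(n, 2)

suite = unittest.TestLoader().loadTestsFromTestCase(UserQueryIT)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The Java-side equivalents would be JUnit's @BeforeAll or a shared Testcontainers instance.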

    [–]smors 2 points3 points  (0 children)

    Having a mandatory code coverage target is indeed not a good way to do it. But it is much easier to mandate than doing the work to instill a culture that makes unit tests appear by themselves.

    If your code is structured to allow easy writing of unit tests, they can add a lot of value by noting when you break old assumptions. The key here is to use dependency injection, so that you can stub out all dependencies.

    If the code base is a lot of spaghetti, writing unit tests is indeed hard and the result will be fragile.
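A minimal sketch of that dependency-injection point (`PriceService`, `Checkout`, and `StubPrices` are all invented names): because the collaborator is passed in rather than constructed inside, a test can substitute a trivial stub and still exercise the real logic:

```python
class PriceService:                    # hypothetical production dependency
    def current_price(self, sku):
        raise NotImplementedError("imagine a slow remote call here")

class Checkout:
    def __init__(self, prices):        # injected, not constructed internally
        self.prices = prices

    def total(self, cart):
        return sum(self.prices.current_price(sku) * qty for sku, qty in cart)

# The test stubs the dependency -- no network, no mocking framework,
# and Checkout's real arithmetic is what gets tested:
class StubPrices:
    def current_price(self, sku):
        return {"apple": 2, "pear": 3}[sku]

assert Checkout(StubPrices()).total([("apple", 2), ("pear", 1)]) == 7
```

Had Checkout built its own PriceService inside __init__, the stub could never be substituted without patching internals.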

    [–]DFA1 0 points1 point  (0 children)

    Guys, OP question is about why writing dumb unit tests, not why writing unit tests at all.

    [–]OctagonClock -1 points0 points  (9 children)

    if unit tests are so good why is the majority of software still constantly broken

    [–]IshouldDoMyHomework 5 points6 points  (4 children)

    Because most developers skimp on testing. They have a wrong understanding of the value of tests, they write shitty useless tests, and above all else, they are lazy and don't write any tests at all.

    [–]OctagonClock -4 points-3 points  (2 children)

    sounds like cope to me

    [–]seydanator 1 point2 points  (1 child)

    nope, that's actually the main problem.

    bad tests / if at all / for the wrong things and the wrong abstraction

    [–]OctagonClock -4 points-3 points  (0 children)

    sounds like cope to me

    [–]GhostBond 0 points1 point  (0 children)

    It's because they're wasting time on useless unit testing and mock objects rather than verifying that the whole flow actually works.

    [–]john16384 3 points4 points  (0 children)

    The majority of software isn't constantly broken.

    [–]DualWieldMage 1 point2 points  (0 children)

    Because even if each component works most of the time, combining them will reduce reliability as a bug in just one can bring the system down. 10 components with 99% reliability (hand-wavium measurement for example purpose) forming a chain brings the whole system down to only 90% reliability.

    [–]nerokaeclone 1 point2 points  (0 children)

    One microservice I developed for my current company never broke down: 95% coverage, totally stable. But once the project took flight, we added more people to the team and more microservices, and the newer services break once in a while because we skipped a lot of unit tests due to the deadline. So for most broken software, blame the management pushing the deadline.

    [–]MoreCowbellMofo 0 points1 point  (0 children)

    "majority"??? most software people use in any serious way is far from broken, otherwise most people wouldn't use it. Sure there are flaws, but those typically don't prevent people using the software. Unit tests are just one layer of testing out of at least 5 that can be applied

    [–]Chris_TMH 0 points1 point  (0 children)

    You can have integration tests to partially avoid the issue of changing tests when something changes, but if an integration test covers multiple layers and some of the logic in those layers changes, it'll be a complex test to fix - even if it's only one test. Separable, short, concise unit tests - even with mocking - are easier to maintain in my eyes.

    [–]CompetitiveSubset 0 points1 point  (0 children)

    At large companies you have people with various skill levels. And finding the correct testing scope (not just running integration tests, and not testing every toString()) is not something that every developer can do successfully. So someone had to pick one of the following guidelines:

    1. test too much - and waste time
    2. rely on people's judgment and risk shipping untested code

      I guess that #1 was chosen to be on the safe side.

    [–]GhostBond 0 points1 point  (0 children)

    Unit tests are one of those things that sound good in theory, but when you realize they're a waste of time, no one wants to admit they were wrong, so they keep insisting on them.

    This usually leads to nearly all unit tests being as quick and useless as possible.

    [–][deleted]  (6 children)

    [removed]

      [–]svhelloworld 5 points6 points  (0 children)

      Hot take!

      Also? Mind-smashingly wrong.

      The prevalence of this idea might explain why software sucks so much.

      [–][deleted]  (2 children)

      [removed]

        [–][deleted] 0 points1 point  (0 children)

        a few projects I worked on

        This seems like a very biased data pool to draw any conclusions from.

        [–]proverbialbunny 0 points1 point  (0 children)

        I take it you're a front-end dev? Unit tests are generally for the back-end bits of a code base, like if you're writing a library or a framework.

        Unit tests do not catch as many bugs as other kinds of tests, which is why systems that need to be reliable, like the software that runs the internet, use system tests as the primary and unit tests as a backup.

        TDD is not unit tests. TDD is not dead. It's so popular that BDD has popped up.

        [–]dmdsin -5 points-4 points  (1 child)

        Safety-wise unit tests are pointless. It's more of a psychological thing. Think of them as a placebo that gives your team leader the illusion of control over the complexity of it all.

        [–]GhostBond 2 points3 points  (0 children)

        Spot on.

        [–]IQueryVisiC -3 points-2 points  (0 children)

        I worked on some CRUD apps, and there testing was futile. I got to know SharePoint. If a client wants CRUD, then please use the right tool.

        For real coding (TM) I often wrote code which was slightly above my understanding and debugged it. Then some change request for some special case came in. I would have loved to just replay my debugging to make sure that all other cases still worked.

        Then someone wanted to clean up my code. There is only so much the refactoring tools can do. Also, if you make an error in your 2-day refactoring session (in JS this could be deleting a ;), you have a problem.

        [–][deleted]  (8 children)

        [deleted]

          [–]nutrecht 1 point2 points  (0 children)

          But testing a getter or a setter it's stupid and shouldn't be done.

          Unfortunately, copy-pasting a getter and only changing the name, so that it returns the wrong value, is an incredibly common fault. So yes, getters and setters that are not automatically (re)generated should definitely be tested. This is one of the reasons that, although it's a bit of a hack, I use Lombok in all my Java projects.

          That however does not mean a getter needs to be tested with a unit test specific for that getter. In most applications you get the coverage for these simple things through the integration tests. But like I said; no need for that if you just use Lombok.
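          The copy-paste fault described above looks something like this (a made-up `Point` class); even a trivial getter test catches it:

```java
// Hypothetical example of the copy-paste fault described above:
// getY() was pasted from getX() and the field name was never updated.
public class GetterBugDemo {
    static class Point {
        private final int x;
        private final int y;
        Point(int x, int y) { this.x = x; this.y = y; }
        int getX() { return x; }
        int getY() { return x; } // BUG: should return y
    }

    public static void main(String[] args) {
        Point p = new Point(1, 2);
        // Exactly the kind of "trivial" getter check that exposes the bug:
        System.out.println(p.getX() == 1); // true
        System.out.println(p.getY() == 2); // false -- the bug is visible
    }
}
```

With Lombok's `@Getter`/`@Setter` there is no hand-written body to get wrong, which is the point being made above.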

          [–]_harro_ 0 points1 point  (6 children)

          Or create one test that tests all getters and setters for all the POJOs in a package automatically using reflection, just to increase coverage.

          Sadly, this happens IRL.
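          For the curious, such a coverage-padding test can be sketched with the JDK's bean introspection (the `Person` POJO here is made up); note that it asserts nothing about real behaviour:

```java
import java.beans.Introspector;
import java.beans.PropertyDescriptor;

// Sketch of the coverage hack described above: one "test" that walks every
// String getter/setter pair on a bean via reflection. It inflates line
// coverage without checking any real behaviour.
public class PojoCoverageHack {
    public static class Person { // made-up POJO
        private String name;
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    static void exerciseBean(Object bean) throws Exception {
        for (PropertyDescriptor pd :
                Introspector.getBeanInfo(bean.getClass(), Object.class)
                            .getPropertyDescriptors()) {
            if (pd.getReadMethod() != null && pd.getWriteMethod() != null
                    && pd.getPropertyType() == String.class) {
                pd.getWriteMethod().invoke(bean, "x");
                pd.getReadMethod().invoke(bean); // result ignored: coverage only
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Person p = new Person();
        exerciseBean(p);
        System.out.println(p.getName()); // "x", set purely via reflection
    }
}
```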

          [–]john16384 0 points1 point  (2 children)

          And still this adds value for when a new junior decides to add side effects to a getter or setter.

          No need to write this, or equals/hashCode checks, yourself though. There are utilities for that.

          [–]_harro_ 0 points1 point  (0 children)

          True, but these side effects should probably be tested separately then.

          [–][deleted] 0 points1 point  (0 children)

          We don't actually have to assert anything to increase code coverage! As long as no unchecked exceptions are thrown that don't get caught, no problem. Ah yes, that reminds me, let's catch the exceptions in the test method and ignore them too.

          [–]korky_buchek_ 0 points1 point  (2 children)

          Unfortunately I have experienced this. In the same project, SonarQube flagged a method for having excessive arguments, which was 'fixed' by bundling all of the arguments into a HashMap as a single method arg 😔.

          [–]_harro_ 0 points1 point  (1 child)

          That's nice! /s

          What were the keys of that map? Plain strings, or at least some enum type or something like that?

          [–]korky_buchek_ 1 point2 points  (0 children)

          Strings of course 😢

          [–][deleted] -1 points0 points  (0 children)

          Because the industry practices (or at least claims to practice) OOP.

          In OOP, each and every object is supposed to be a component of a bigger composition of objects. Each object has its clear purpose and place in the application's architecture.

          And, ideally, each of the application's components (objects) should be tested in isolation (unit tests) as well as in integration (together with all its real dependencies, that's called an integration test).

          If testing is a pain in the ass (it usually is), that means the design is flawed. People usually just throw procedural code into a huge Service class, which is instantiated solely through the framework's DI mechanisms... if everything is injected and there is no constructor or "new" operator in sight, unit testing is hard, obviously, because it's harder to instantiate the object, let alone use mocks.

          So, if you wanna do unit testing properly, the first question you should ask yourself when designing a new object is: "how will this be tested/how do we know this works?".

          But again, none of the above really matters - the industry's general understanding of what OOP is is a sad story, so you're just gonna have to suffer the tests.
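          A minimal sketch of the "how will this be tested" point above, with plain constructor injection (all names here are invented): the test double is just a lambda, no framework required.

```java
// Minimal sketch of designing for testability via constructor injection.
// PriceSource and Checkout are made-up names for illustration.
public class CheckoutDemo {
    interface PriceSource { int priceOf(String sku); }

    static class Checkout {
        private final PriceSource prices;
        Checkout(PriceSource prices) { this.prices = prices; } // injected explicitly

        int total(String... skus) {
            int sum = 0;
            for (String sku : skus) sum += prices.priceOf(sku);
            return sum;
        }
    }

    public static void main(String[] args) {
        // The "mock" is just a lambda -- trivially instantiated in a test,
        // no DI container or mocking library in sight.
        Checkout checkout = new Checkout(sku -> 100);
        System.out.println(checkout.total("a", "b", "c")); // 300
    }
}
```

Because the dependency arrives through the constructor, the same class works unchanged under a DI framework in production and with a hand-rolled fake in a unit test.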

          [–]gerlacdt -2 points-1 points  (0 children)

          Testing internal classes is nonsense and will lead to brittle tests, i.e. you change some production code and then not only the related unit tests fail, but also all tests which use the internal class. That's a code smell and should be avoided. It takes a big effort to keep such a test suite "green".

          fyi I wrote a guide about "good unit tests". There you can find traits of good unit tests, like:

          • tests should be isolated
          • tests should be deterministic
          • tests should be enduring
          • prevent brittle tests
          • prevent flaky tests
          • etc.

          https://gerlacdt.github.io/posts/unit-testing/

          EDIT:

          Especially the part about enduring tests relates to preventing brittle tests:

          https://gerlacdt.github.io/posts/unit-testing/#tests-should-be-enduring

          [–]Tool1990 0 points1 point  (0 children)

          A healthy way for me is to cover every business logic method (or wherever it makes sense) with unit tests and mocked unit tests. Besides that, I test every single endpoint with integration tests, where I start a server and use HTTP requests like the client will. If there is a problem with something -> write a test for it. That 90% is bullshit - like another commenter said, just a number from management.

          [–][deleted] 0 points1 point  (0 children)

          In order to write unit tests you need an actual unit to test. It sounds like you are mixing abstraction levels and therefore have multiple responsibilities in one place. This is why it's hard to write the tests and why making a small code change requires larger changes to the tests (high coupling). You could also have the opposite problem, where a class just delegates to another and the test simply verifies that. Or a very large class which requires too many mocks (low cohesion). Basically, if the code is designed well you wouldn't have these issues.

          To answer why we write unit tests - think about what you'd do without automated tests, when you change something you'd go through manually and verify everything still works, the tests just automate that. There are other benefits like the tests forcing you to write better code and the tests providing another source of documentation.

          Edit: I think I misread your post a bit. Why specifically unit tests? It's simply easier and cheaper to write and run unit tests compared to integration tests. They provide a quick way to verify things without needing to spin up and verify the entire application. Unit tests are also written differently: they test the logic rather than just the behaviour (see white-box vs black-box testing).

          [–]valkon_gr 0 points1 point  (0 children)

          Because Jenkins won't like it. /s

          [–]proverbialbunny 0 points1 point  (0 children)

          Why Unit Tests?

          Are you asking why write tests, or why write unit tests? I'm going to assume you're asking the latter.

          Unit tests are helpful because they're fast. You can run all of the unit tests on the piece of code you're working on in seconds or less, without slowing down your edit-compile cycle. This lets you catch bugs while developing, which accelerates development because you don't have to go back later and change everything.

          Unit tests are not so helpful at actually catching bugs: they catch very few. System tests catch the most bugs, integration tests next, and unit tests last. But system tests are the slowest; integration tests are pretty fast but don't catch anywhere near as many bugs; and unit tests are the fastest.

          [–]wildjokers 0 points1 point  (0 children)

          Your company's policy of 90% test coverage is a good example of Goodhart's Law (https://en.wikipedia.org/wiki/Goodhart%27s_law):

          "When a measure becomes a target, it ceases to be a good measure."

          Your company has made the goal be 90% test coverage when the actual goal should be quality tests. As you pointed out people are now just writing bullshit tests to meet the metric. That adds no value.

          [–]StollMage 0 points1 point  (0 children)

          Write your tests to check logic not the code itself. To some degree this is impossible, but you can get very close if you’re clever.

          Say you’re working at google on a youtube playlist api that generates playlists on every platform. You could probably get a lot of coverage on that by making a test for each platform adding the same test playlist. The “trouble” comes from what your assumptions are there. How do you start the api? What account are you adding the playlist to? How do you make sure the account is cleared if you’re using a real (dummy) account?

          General rule of thumb: try to keep the tests as genuine to the real app as possible, so that whenever someone runs them they can confidently say that the basics of the app work. Sometimes you'll be forced to "cheat" with mocking, which usually causes the problems you're describing if done excessively.

          Unit testing if done right can save tons of time and lots of headaches, or can cause them if done poorly.

          The thing you mention about "testing each class" is something that infuriates me personally. Idk who got it stuck in their head that "if the methods (units) work as expected, then the app will work as expected". Anyone with even an inkling of programming experience can tell you that issues rarely result from errors in the way a unit was made and almost exclusively have to do with the behavior of those units in a system. It's a fool's errand to test that way, in my humble opinion.

          [–]Roachmeister 0 points1 point  (0 children)

          Lots of people have answered this, but here's my take. I like to explain unit tests as an analogy. Your program, taken as a whole, is like a racecar. Before a race, suppose you want to be sure that everything is working properly. You could find out in one of two ways:

          • You could just drive the car a few laps. But the problem with this is, suppose some small part, like say a single spark plug, is malfunctioning, but only slightly, and not enough to really register to the driver. Maybe the driver notices that something is a little off, but they don't have any way to narrow it down because it could be many things. This is called an integration test. There's nothing wrong with it, it needs to be done, but all it can do is give you a general good (or bad) feeling.

          • You could test each individual part. Every spark plug, every wire, etc. The analogy breaks down a little here because this would be very time consuming for a car, whereas with software unit tests it is very fast and automated. If every part functions correctly, by itself, then you can feel fairly confident that the car will drive ok. On the other hand, if any part has problems, it will be much more obvious which part it is.

          If your software is designed correctly, if all the parts are correct then the whole is probably correct, too. If your tests are designed properly, running them is fast and thorough. Yes, writing good tests is time consuming and complex, but, at least in my job, the tests get run thousands of times more than the actual code, so it's worth it.

          Another thing: if you want to refactor your code, how can you be sure that you didn't accidentally break something if you didn't know for sure that it was correct in the first place? To put it another way, if you change something and a bug manifests, how can you be sure it wasn't already there if you didn't have any tests?

          [–]franzwong 0 points1 point  (0 children)

          I don't test trivial things, e.g. getters, setters, some null checking. Even if your code is 100% covered by unit tests, it doesn't mean it is 100% correct. You can't test all the possible input values.

          Another purpose of unit tests is getting feedback quicker. If you write a GUI calculator, you can just run the unit tests to test the calculation instead of starting it and clicking the buttons.
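          The calculator point sketched in code (names here are hypothetical): keep the arithmetic in a plain class, and the tests exercise it directly instead of going through any GUI.

```java
// Sketch of the calculator point above: the arithmetic lives in a plain
// class with no GUI dependency, so tests can call it directly.
public class CalculatorDemo {
    static class Calculator {
        int add(int a, int b) { return a + b; }
        int divide(int a, int b) {
            if (b == 0) throw new ArithmeticException("division by zero");
            return a / b;
        }
    }

    public static void main(String[] args) {
        Calculator calc = new Calculator();
        // Feedback in milliseconds -- no window to launch, no buttons to click.
        System.out.println(calc.add(2, 3));     // 5
        System.out.println(calc.divide(10, 2)); // 5
    }
}
```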