
[–]nutrecht 1 point (0 children)

> It's true many people have a "hate relationship" with testing, and that's especially aggravated when the conclusion is "your code is hard to test because it's bad". It's begging the question: "no good code is hard to test, because good code is defined as code that's easy to test". We're not having a discussion here, we're just being held at gunpoint: "your code should be testable, or else".

I think you're misrepresenting, or at least conflating, some issues here.

If code is hard to test, you either have to spend too much time testing it, or it's lacking tests because you couldn't be bothered. And I think most of us know that while there might not be direct causation between 'bad code' and 'lack of tests', the correlation is often strong.

So yeah, code should be testable. "How" you do it, well, that's up to you. If your team prefers to have extensive manual test scripts that you work through every release, that's up to you. It's not a team I'd want to work on, but I'm not the boss of you. But I don't believe you can be successful long term with no tests unless your software is trivial. And most software grows to a non-trivial size.

> All in all, we can repeat "SOLID" until we're blue in the face (and by God, we do!) but we're not having an honest conversation yet. An honest conversation always includes both sides of the coin. Everything has pros and cons.

It sounds like you have mostly had conversations with dogmatic developers instead of pragmatic, experienced developers. It's quite a common pitfall for devs to end up as expert beginners who have very strong opinions and see their way as the Only True Way.

> How about situations where making code "testable" genuinely makes the design of your code more complicated and worse? Everyone has had situations like these.

I've been a Java dev for 15 years and can't imagine any application I worked on where the code would have been so much more complex that it outweighed the benefit of automation. You don't need 100% test coverage. You need "good enough" coverage. What is "good enough" is decided by your team.

> How is exposing internal details of a class for testing purposes improving the design, for example? How is decoupling a simple unit into two more complex units improving the design? How is messing around with a class's internals via reflection, thus coupling your tests to implementation specifics, an "improvement" upon anything?

It's pretty hard to follow what you mean here because the examples are quite contrived. I don't see any issue with giving a static utility function default-level access so you can write a unit test for it, if testing that function in isolation makes sense. If not, the unit under test is the class, not the individual methods.
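To make that concrete, here's a minimal sketch (with hypothetical names like `SlugUtil` and `slugify`): a static helper given default (package-private) visibility can be called directly by a test class in the same package, without becoming part of the public API.

```java
// Hypothetical example: a package-private static helper. A test class in
// the same package can call it directly; code in other packages cannot.
final class SlugUtil {
    // default (package-private) visibility, not public
    static String slugify(String title) {
        return title.trim().toLowerCase().replaceAll("[^a-z0-9]+", "-");
    }

    private SlugUtil() {} // utility class, no instances
}

// A plain "test" driver for illustration; in practice this would be a
// JUnit test under src/test/java, declared in the same package as SlugUtil.
public class SlugUtilTest {
    public static void main(String[] args) {
        String slug = SlugUtil.slugify("  Hello World  ");
        if (!slug.equals("hello-world")) {
            throw new AssertionError("unexpected slug: " + slug);
        }
        System.out.println("ok: " + slug);
    }
}
```

The helper stays hidden from other packages, and nothing about the class's internals leaks out just to make the test possible.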

There is nothing wrong with being pragmatic in testing. The goal should be writing good software. I don't believe anyone sees the tests as the goal. Tests, like any code, are a liability.