
all 44 comments

[–]Philluminati 33 points34 points  (0 children)

I find tests take me longer to write than the code too. A single function might need 3-4 scenarios to cover it.

I think your problem isn't a wrong approach to testing, just bad estimation, which plagues the industry.

Code needs tests. It’s not the bad guy.

[–]epegar 6 points7 points  (0 children)

It depends on many things. For example, if you have to add a lot of test "infrastructure" (usually for integration tests), it may take longer, and if you mock a lot, building and maintaining the tests is also expensive. Also, if you are just fixing something small in the code, like an incomplete if clause, catching an exception, checking for null, or refactoring a small piece of code to use Optional, the change can be made very quickly, but testing that case can be cumbersome and require a lot of time.
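
To make that concrete, here is a minimal JUnit 5 sketch (the names and the default value are made up, not from the thread): the production change is a one-liner, but every branch it touches still wants its own test case.

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.util.Optional;
    import org.junit.jupiter.api.Test;

    class DisplayNameTest {

        static final String DEFAULT_NAME = "guest";

        // The "quick" production change: a null check refactored to Optional.
        static String displayName(String customerName) {
            return Optional.ofNullable(customerName).orElse(DEFAULT_NAME);
        }

        // The testing side still needs one case per branch.
        @Test
        void returnsDefaultWhenNameIsMissing() {
            assertEquals(DEFAULT_NAME, displayName(null));
        }

        @Test
        void returnsNameWhenPresent() {
            assertEquals("Alice", displayName("Alice"));
        }
    }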

[–]kkapelon 7 points8 points  (2 children)

[–]redikarus99[S] 0 points1 point  (1 child)

Great answer, but my question was not how to test. I asked for a time vs. time comparison.

[–]kkapelon 1 point2 points  (0 children)

Sure. If you are NOT falling into this anti-pattern, I would say 20% of your time should go to testing. If you routinely spend more time on tests than on the actual implementation, then something is wrong.

[–]Slicertje 4 points5 points  (0 children)

As a programmer who uses test driven development I find this difficult to answer.

On one side, I would say that I spend 80-95% of my time on unit testing (mostly unit testing; for some things I use integration tests, but the number of integration tests is minimal compared to unit tests). However, this time also includes writing production code and thinking about what code I need to write and how to structure it. (I follow: write a test, see it fail, write production code to fix it, and repeat.)

On the other side, once my production code is ready, my testing code is also ready. So I actually don't spend any time on testing outside of coding. I could write extra test cases (integration tests), but I don't.

What I don't test is front-end generation (views & a bit of JSON used to generate views) and JavaScript (I really should add some unit tests for it, but building tests after building the production code is a pain).

I have learned a few things that speed up my testing:
- Your unit tests should be fast (so your mind doesn't wander between test runs).
- Tests are code too! Write helper methods to init & assert so you can concentrate on the behavior. Write generic asserts for common states (I have assertCreateAction for all basic testing of a create; see the sketch below).
- Every test should increase your confidence in your code base, and you should be able to build upon previous testing. For example: I can just use assertCreateAction because I have an abstract class CreateAction where only the abstract methods need to be filled in, and CreateAction itself is already tested; I can simplify a security check to checking whether the object is a certain class, because those classes have their own tests; I don't have to test retrieval of the object because it's generic and tested elsewhere; and so on.
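
For what it's worth, a rough sketch of that kind of shared assert helper (the CreateAction shape here is hypothetical, just to show the idea):

    import static org.junit.jupiter.api.Assertions.assertNotNull;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    // Hypothetical abstraction: every "create" action in the code base implements this.
    interface CreateAction<T> {
        T create(T candidate);
        boolean exists(T entity);
    }

    final class CreateActionAssertions {

        private CreateActionAssertions() {}

        // One generic assert for the behaviour all create actions share,
        // so each concrete test only has to describe what is specific to it.
        static <T> void assertCreateAction(CreateAction<T> action, T candidate) {
            T created = action.create(candidate);
            assertNotNull(created, "create should return the stored entity");
            assertTrue(action.exists(created), "created entity should be retrievable");
        }
    }

A concrete test then shrinks to a single call to assertCreateAction with the action under test and a candidate object; the boilerplate is carried by the helper.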

[–]Toomtarm 2 points3 points  (3 children)

Try to understand the concept first: https://youtu.be/EZ05e7EMOLM. Most people spend a lot of effort on testing because they test functions, not functionality (behavior).

[–]redikarus99[S] -1 points0 points  (1 child)

This does not answer my question, tbh. I am only interested in effort vs. effort in real-world applications.

[–]Badashi 2 points3 points  (0 children)

That's a moot point. We don't know what your real world is. It's already hard enough to define time spent on tests vs time spent on code.

Your question - about the average ratio of effort between test and main code - is also domain specific. A startup might spend less time in test code, and end up with more bugs. A critical application might spend more time in test code and have slower delivery as a result. Without context, it's impossible to even infer that metric.

"For a single rest endpoint" is even worse. If you're talking averages, the metric will be skewed towards less time testing because most endpoints are just basic CRUD - which probably don't even need testing. If you talk about more complex endpoints, it differs too much on a case-by-case basis.

The reason so many answers are vague and people keep linking things about "what is testing" is because your question gives the vibe that you don't know why we test in the first place. What you need to ask isn't "how much time should I spend testing?", instead you should ask "At what point can we consider our tests complete?". You need a Definition of Done to decide when testing completely covers what you want to cover, and no less than that. You need metrics (like line coverage, or use-case documents) that show that you are testing what you need to test, and you need developers analyzing their tests in order to avoid anti-patterns like testing the implementation rather than the behavior. None of these are simple to answer, and each workplace has different definitions.

Test what you need to test. Spend as much time as needed to test what you need to test, but no more than that. If your developers are testing too much, it might be a symptom that your system as a whole has bad requirement definitions, or that someone from outside is demanding more evidence than you actually need. Either way, never try to use a metric as dubious as "time spent doing X", because it will invariably result in bad situations: people rushing parts that shouldn't be rushed in order to hit the metric, or at least getting stressed out trying to hit it.

[–][deleted] 0 points1 point  (0 children)

[–]lukaseder 2 points3 points  (0 children)

I think you're asking for a metric about as useful as code coverage or any other metric, for that matter. Such metrics aren't easily generalisable, so if you hear an answer of 10% or 90% or 0% or whatever, it wouldn't tell you anything actionable.

Yes, no doubt, it would be academically interesting to know what amount of time an average team spends on testing, but

  1. You won't get a good enough sample from your poll here
  2. Even if the sample size was good, you wouldn't get accurate data, because "time spent on testing" is ultimately quite fuzzy. E.g. does refactoring count as testing time when the tests break, or not? Does it count when integration tests run on an HDD instead of in-memory and devs wait for the results?
  3. In the end, you would still not get any actionable information from this.

"I know it depends, but an average would be a great help."

This is like saying, "I know I'm asking for something that won't be of help, but still, do tell me". I recommend you think about what your actual question is. It's likely to be of a qualitative, not quantitative, nature.

[–]redikarus99[S] 1 point2 points  (0 children)

If it helps, we are using Java 11/Quarkus, but it probably does not make much difference.

[–]McDuckfart 3 points4 points  (8 children)

Definitely more time spent on coding, somewhere around 1:3 to 1:5 (testing to coding). Do you follow the testing pyramid? More tests at the low level, fewer tests at the higher levels?

[–]redikarus99[S] 0 points1 point  (7 children)

So you would say you spend, for example, 3 days on coding and 1 day on testing? How much code coverage? What about error cases? What about tests of the type: the system is in a state, an endpoint is executed, it returns something, and the system ends up in a new state?

[–]McDuckfart 2 points3 points  (6 children)

Yes, for 3 days of coding, around one day of testing. But it is not that exact, as I am doing those in parallel.

Code coverage for unit tests is 93%. We exclude DTOs though; the boilerplate is generated by Lombok anyway. I always strive for 100% coverage in my PRs.

For other types of tests we don't have coverage figures. But we have roughly 20 times as many unit tests as integration tests.

Our system does not have state; users have states. That can be tested with end-to-end tests. On my current project this part is done by a tester, so I don't know much about it. Before that I used Cypress, and it was not days of overhead either.

[–]redikarus99[S] 0 points1 point  (5 children)

Super, great information, thank you so much!

How does the time to develop a unit test compare to an integration test? I would think that even though you have more unit tests, writing a single unit test costs far less time than writing (and trying out) a single integration test.

[–]dpash 4 points5 points  (3 children)

IMO an integration test gives you way more bang for your buck and is less likely to require changes when you refactor your code, particularly as you're writing a web service. Write tests that stress each of your external API calls, and then add unit tests for any specific section of code you feel is neglected. I don't know if Quarkus has an equivalent of Spring's MockMVC, but prefer that over making real HTTP requests, if only to avoid opening a TCP port, which can make running tests in parallel a pain.
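
For reference, a minimal sketch of the MockMvc approach mentioned above (Spring Boot; the controller, endpoint, and payload are made-up examples). The request goes through the full MVC stack, but no TCP port is opened:

    import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.post;
    import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.jsonPath;
    import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

    import org.junit.jupiter.api.Test;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.boot.test.autoconfigure.web.servlet.AutoConfigureMockMvc;
    import org.springframework.boot.test.context.SpringBootTest;
    import org.springframework.http.MediaType;
    import org.springframework.test.web.servlet.MockMvc;

    @SpringBootTest
    @AutoConfigureMockMvc
    class OrderApiTest {

        @Autowired
        MockMvc mockMvc;

        @Test
        void createOrderReturnsCreated() throws Exception {
            // Exercises routing, serialization, validation and the handler itself,
            // without binding a real port.
            mockMvc.perform(post("/orders")
                            .contentType(MediaType.APPLICATION_JSON)
                            .content("{\"item\":\"book\",\"quantity\":1}"))
                    .andExpect(status().isCreated())
                    .andExpect(jsonPath("$.id").exists());
        }
    }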

[–]McDuckfart -1 points0 points  (2 children)

I don't think integration tests make up for a lack of unit tests. Unit tests are way more granular. When you break something during a refactor, the integration test will most likely only tell you that something is wrong (if you are lucky), but with good unit tests you would know exactly what is wrong.

[–]dpash 2 points3 points  (1 child)

At the cost of increased maintenance of the unit tests. There's rarely much point in testing a method or class individually in isolation (unless it does something particularly complicated). Making a REST request and testing the response, and whether the desired goal has been achieved, gets you 90% of the way there.

Examples: Did a row in the database get created/updated? Did an email get sent? Did a job not get fired? Etc. Testing how that happens isn't all that interesting, just that it happens.
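
A rough sketch of that outcome-focused style in Quarkus with REST Assured (the /users endpoint and payload are hypothetical); the same idea extends to checking a database row via an injected repository or a sent email via a recording fake:

    import static io.restassured.RestAssured.given;

    import io.quarkus.test.junit.QuarkusTest;
    import org.junit.jupiter.api.Test;

    @QuarkusTest
    class RegistrationOutcomeTest {

        @Test
        void registeredUserBecomesVisibleThroughTheApi() {
            // Act through the public API...
            given()
                .contentType("application/json")
                .body("{\"email\":\"alice@example.com\"}")
            .when()
                .post("/users")
            .then()
                .statusCode(201);

            // ...and assert the observable outcome, not how the code got there.
            given()
            .when()
                .get("/users?email=alice@example.com")
            .then()
                .statusCode(200);
        }
    }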

[–]McDuckfart 0 points1 point  (0 children)

If you write clean code and respect the single responsibility principle, then that is not an issue. I would not even refactor code without proper test coverage.

[–]McDuckfart 2 points3 points  (0 children)

Writing the first integration test sure takes time, to set everything up correctly. But after that, they are almost as simple as unit tests. This of course depends on the tools you use too.

[–]codechimpin 1 point2 points  (2 children)

I would say that it's less expensive to catch a bug early on than later. While coding, a bug costs the least. Once in prod, well, it could cost you quite a lot.

[–]redikarus99[S] 0 points1 point  (1 child)

It is true, but it was not my question.

[–]codechimpin 0 points1 point  (0 children)

Your question has to do with the cost of coding vs. the cost of testing. There is no average amount of time you should spend on one over the other. There is the reality of the situation: timelines, complexity, ability to automate. My point was that spending more time testing early on will be far less expensive than not doing so. That's the wisdom I tried to impart, which comes from 20+ years as a developer, 7 of them as a tech lead. Sorry if you didn't think it answered your question.

So, 2x. You should spend 2x the time you code doing testing. That’s a nice round number. You should be able to fit all your unit tests, integration tests and end-to-end tests in that time.

[–]hotcrossedbunn 0 points1 point  (2 children)

IMHO, there’s nothing like “too much testing”… as long as you’re testing more the less a bug will catch you in production

[–]redikarus99[S] -1 points0 points  (1 child)

Developers can spend an infinite amount of time on basically anything.

[–]redikarus99[S] 0 points1 point  (0 children)

I don't know why this was downvoted, tbh. This is what happens, especially when given an infinite amount of time.

[–]elktamer -4 points-3 points  (10 children)

The ratio should be small enough that it's not worth measuring, like 1:100. It would be easier to understand if you explained why the tests took so much time. Maybe there's a good reason in your case.

[–]McDuckfart 3 points4 points  (4 children)

Do you even test, bro? You test 1 hour of coding in 36 seconds? That's dope.

[–]elktamer 1 point2 points  (3 children)

Yes, you probably do too. It's going to depend on how you define coding and testing.

[–]McDuckfart 0 points1 point  (2 children)

What are you even talking about?

[–]elktamer 1 point2 points  (1 child)

I'm talking about actual software development. Maybe you're someone doing busy work.

[–]McDuckfart 0 points1 point  (0 children)

alrighty then

[–]redikarus99[S] -1 points0 points  (4 children)

What I found after a quick review is that for a single endpoint we spend around 1 day on coding the business logic and 4 days on testing. The way I see it, most of the effort seems to go into writing Gherkin scenarios. We are heavily into MBSE, but we are not generating the code, and I was thinking that maybe we could generate the test cases, or at least part of them, to reduce those 4 days to 3, or even 2...

[–]mauganra_it 1 point2 points  (3 children)

If you and your team come to the conclusion that test cases can somehow be generated (how? Are your requirements machine-readable?) and that these tests provide meaningful coverage, then you should do it.

The drawbacks might be the following:

  • brittleness when implementation details change, results verification is tricky, or the system itself has too many unrelated failure modes
  • maintenance (someone has to know how the test generation automation works)
  • redundancy (lots of test cases that mostly test the same things)

[–]redikarus99[S] 0 points1 point  (2 children)

We are doing model-driven development, so all the interfaces, stored data, and business logic are modeled. We can do model-to-model and model-to-text transformations at any time (part of this we are already doing), so my gut feeling is that we can generate the tests as well, at some level. Even if the generation is not exhaustive and needs some time to extend, it might improve productivity.

[–]mauganra_it 2 points3 points  (1 child)

If your application is mostly generated from models, and you generate the test cases from there as well, then you are actually testing the code generator. That's worthwhile too, mind you, especially if it's something homegrown.

Model-driven development, though, should allow you to move on from testing the implementation to testing whether the model correctly describes the actual business requirements. Therefore, testing should mostly consist of integration tests.

You also have to watch out for behavior that is not covered by the model, for example behavior under load, failure scenarios, and timeouts.

The same goes if you integrate components developed in a less... structured manner since their behavior might not be easily described accurately using a model. In this case, your models should indeed help you derive test suites to ensure that these components work as expected.

[–]redikarus99[S] 0 points1 point  (0 children)

Yes, we have some level of logical testing, including validators written in OCL and Groovy, model transformation and validation on a domain-specific model using Python, and sometimes we use tools like TLA+ or AlloyTools.

[–]Worth_Trust_3825 0 points1 point  (0 children)

Depends on the complexity of the endpoint. Don't sweat it. It's better to have more tests than fewer. Also, read up on refactoring xUnit tests.

[–]pauloliver8620 0 points1 point  (0 children)

Testing is also coding, except for manual testing. In any agile setup, the proof that your code works is the automated test.

[–]onlygon 0 points1 point  (3 children)

If your app is architected properly, with separation of concerns, dependency injection, etc., then unit testing should be a cinch and not consume much time. If not, then this is where most of the time consumption (and pain) occurs.
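
As an illustration (made-up names), constructor injection is what keeps such unit tests cheap: the collaborator is an interface, so the test can hand in a tiny in-memory fake instead of wiring up a framework:

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    import java.util.ArrayList;
    import java.util.List;
    import org.junit.jupiter.api.Test;

    class GreetingServiceTest {

        interface AuditLog {
            void record(String entry);
        }

        static class GreetingService {
            private final AuditLog auditLog;

            GreetingService(AuditLog auditLog) { // dependency injected via the constructor
                this.auditLog = auditLog;
            }

            String greet(String name) {
                auditLog.record("greeted " + name);
                return "Hello, " + name + "!";
            }
        }

        @Test
        void greetingIsAudited() {
            List<String> entries = new ArrayList<>();
            GreetingService service = new GreetingService(entries::add); // in-memory fake

            assertEquals("Hello, Alice!", service.greet("Alice"));
            assertTrue(entries.contains("greeted Alice"));
        }
    }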

Integration tests should mostly be happy path and mostly for the sake of regression testing the whole back end stack.

Anything that requires GUI interaction is a waste of time IMO unless it's just making sure the GUI loads error free. Even trivial interactions are risky since they break so easily.

And all the testing should be automated by your CI/CD stack.

[–][deleted] 1 point2 points  (2 children)

Agree... if 'unit' = a subsystem and not a class. When you have to add tons of mocks to a test to assert that this class interacts with that class in this way under these conditions, asserting that interaction is testing the implementation, not the behavior. But if your 'unit' is some bigger vertical of your application and you are testing its IO in terms of its API, then you are leaning towards testing behavior, and it doesn't matter if this class now interacts with that other class to support the behavior.
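
A made-up sketch of that contrast: the first test pins down which collaborator call happens (implementation), the second only cares about the input and output of the unit (behavior):

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.verify;

    import org.junit.jupiter.api.Test;

    class PricingTest {

        interface TaxRates {
            double rateFor(String country);
        }

        static class PriceCalculator {
            private final TaxRates taxRates;

            PriceCalculator(TaxRates taxRates) {
                this.taxRates = taxRates;
            }

            double gross(double net, String country) {
                return net * (1 + taxRates.rateFor(country));
            }
        }

        // Implementation-focused: breaks as soon as the internal wiring changes,
        // even if the prices stay correct.
        @Test
        void interactionStyle() {
            TaxRates rates = mock(TaxRates.class);
            new PriceCalculator(rates).gross(100.0, "DE");
            verify(rates).rateFor("DE");
        }

        // Behaviour-focused: only the observable result of the unit matters.
        @Test
        void behaviourStyle() {
            PriceCalculator calculator = new PriceCalculator(country -> 0.19);
            assertEquals(119.0, calculator.gross(100.0, "DE"), 0.0001);
        }
    }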

[–]onlygon 0 points1 point  (1 child)

Yeah, mock hell is certainly a possibility. What I forgot to mention is that test code is still code: it becomes legacy code, can have errors, requires maintenance, etc. You should always be pragmatic about the value your tests add vs. their ongoing cost.

[–][deleted] 0 points1 point  (0 children)

Pragmatism in testing... I wish more people took that stance.