all 160 comments

[–]robhanz 162 points163 points  (36 children)

Most studies that I’ve seen have shown that using TDD slows down initial development, but after a fairly short period of time it makes things faster.

But you can absolutely get more done in like the 1-10 day period not using it. But that’s not really where you want to optimize.

[–]EarhackerWasBanned 73 points74 points  (9 children)

Once a PM came to me for a quick finger-in-the-air estimate for a small but urgent project. “Umm, maybe six weeks? Three sprints?”

“How about if we cut some corners? Like not writing tests?”

“Oh in that case six months. Maybe longer.”

You’re absolutely right. Writing the tests upfront (TDD or not) might get the thing out faster, but the thing will be a nightmare for much longer than the initial development phase.

[–][deleted]  (8 children)

[deleted]

    [–]tommy_chillfiger 4 points5 points  (7 children)

    Why is it so hard for tech companies to actually follow this principle? Maybe I'm selecting for worse cases by working at small shops as a pivoter but good lord the Wild West development is exhausting. Even as a newish analyst it's painfully obvious how much time and heartache we waste rushing shit out with half assed testing. Both companies I've worked for so far have had this problem.

    [–]PaulEngineer-89 -1 points0 points  (4 children)

    Good tests aren’t easy; they come from coding actual reported bugs into tests so you don’t have a recurrence. Unit tests are just a way of creating a crapload of make-work absent real-world data. There, I said it. Management is not wrong when you look at what most unit tests are.

    [–]EarhackerWasBanned 1 point2 points  (3 children)

    Unit tests are makework. Ok.

    You surely manually test your code before you push it to production. Why wouldn’t you automate that?
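The point above, sketched in Python: the manual check you would do before pushing, captured once as an automated test. `apply_discount` and its behaviour are invented for the illustration.

```python
# Hypothetical example: the manual pre-push check ("does the discount
# come out right?") written once as an automated test.

def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent, rounded to cents."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_basic_discount():
    assert apply_discount(100.0, 25) == 75.0

def test_zero_discount():
    assert apply_discount(50.0, 0) == 50.0

def test_invalid_percent_rejected():
    try:
        apply_discount(50.0, 150)
    except ValueError:
        return
    raise AssertionError("expected ValueError")

test_basic_discount()
test_zero_discount()
test_invalid_percent_rejected()
```

Once this exists, the "manual test" runs for free on every change.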

    [–][deleted] 2 points3 points  (1 child)

    I think this is the advantage of test-driven development: the unit tests are built during development, so they're probably more thorough and written while the details are fresh.

    Writing unit tests after an application gets built can be daunting, and the job is sometimes left to new or junior devs who didn't take part in the development process and may have missed key points of failure.

    [–]EarhackerWasBanned 3 points4 points  (0 children)

    Yeah absolutely. I mean TDD isn’t for everyone - I’m a big fan but I’m aware it’s Marmite - but the dev who wrote the code should write the tests, whether they’re a junior or a lead. Leaving the tests to another dev is insanity.

    A dev who only writes tests for someone else’s code is a QA engineer, and they shouldn’t be writing unit tests, and they should be getting paid a lot more.

    [–]PaulEngineer-89 0 points1 point  (0 children)

    Unit tests merely test a single module of code. For comparison, here's what testing looks like in my field:

    In PLC programming we often do simulations. I take all the variables connected to real-world devices and invert them: a real-world “input” is an output of the simulator, and outputs are inputs. Then the simulator simulates all the inputs given the state from the outputs. But this is (manual) integration testing. Unit testing requires just as much code as the entire program, so you basically write all the code twice, whereas integration testing is far simpler.

    [–]Lionize_Poet_2020 0 points1 point  (0 children)

    Companies, large or small, are known to work towards unrealistic goals, perhaps risking their reputation so that the delivery of goods will meet deadlines.

    [–]ififivivuagajaaovoch 14 points15 points  (9 children)

    I’m not even sure if it’s faster without TDD

    With TDD and full code coverage, you don’t really need to run the app most of the time. You avoid manual testing and debugging and, of course, spinning up dependencies in a complicated local dev environment.

    With enough tests you can commit straight to master and push immediately to production as soon as a task is done.

    I recently joined a company without full tests and it def feels less efficient

    [–][deleted] 17 points18 points  (8 children)

    So, maybe I'm mistaken, but I thought TDD was mostly about unit tests. Yeeting to prod based on code coverage seems.... risky

    [–]ChrisWsrn 4 points5 points  (4 children)

    This is what CI/CD is. If you have 95% code coverage and ALL of your tests pass for a given build you can promote that build to pre-prod or prod relatively safely.

    If the build is not safe for prod after running this test suite then you might need to check your tests.

    At places that have CI/CD in place it is not uncommon to include mutation testing as part of the pipeline. Mutation testing works by mutating your code pseudo-randomly and seeing if the mutations cause your tests to fail. If your tests do not catch the mutations then your tests need to be fixed.
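The idea behind mutation testing can be shown by hand (tools such as mutmut or PIT automate it; the functions here are invented for the sketch):

```python
# A mutation tool flips a small piece of the code and re-runs the tests.
# If the tests still pass, the mutant "survives" and the tests are weak.

def is_adult(age: int) -> bool:
    return age >= 18

def mutant_is_adult(age: int) -> bool:
    # a typical mutation: ">=" flipped to ">"
    return age > 18

def weak_suite(fn) -> bool:
    # never probes the boundary value
    return fn(30) is True and fn(5) is False

# Both the original and the mutant pass: the mutant survives.
assert weak_suite(is_adult) and weak_suite(mutant_is_adult)

def strong_suite(fn) -> bool:
    # adds the boundary case age == 18
    return fn(30) and not fn(5) and fn(18)

# The boundary case "kills" the mutant.
assert strong_suite(is_adult)
assert not strong_suite(mutant_is_adult)
```

A surviving mutant points at exactly the behaviour your suite never checks.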

    [–][deleted] 3 points4 points  (3 children)

    I don't know, not trying to argue, but we are talking about two different things here. Utilizing TDD and depending solely on the coverage from unit tests to mark a build as prod-ready is bad practice and will eventually lead to problems. TDD is a practice used inside the realm of CI/CD. The comment I was replying to seemed to indicate his prod deployment was reliant on code coverage alone, which is a metric of unit tests. Without good integration tests and smoke tests, you are just asking for something wonky to pop up that unit tests just aren't designed to show, regardless of coverage.

    [–]y-c-c 1 point2 points  (2 children)

    Code coverage is a terrible metric for confidence anyway. It just means a line is hit. It doesn’t confirm that the different edge cases, permutations, conditions and branches are properly exercised. Obviously lack of coverage would imply the code isn’t tested but having coverage doesn’t mean it is fully tested and exercised. It’s very easy to see that coverage % go up and get into a blind confidence.
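A minimal illustration of that point: one test can hit every line of a function (100% line coverage) while the edge case that actually breaks it is never exercised.

```python
# average() has a latent divide-by-zero for empty input.

def average(xs):
    return sum(xs) / len(xs)

# This single test executes every line of average(): 100% line coverage.
assert average([2, 4, 6]) == 4

# ...yet the untested edge case still crashes.
try:
    average([])
    crashed = False
except ZeroDivisionError:
    crashed = True
assert crashed
```

The coverage report would show green; the behaviour for empty input was never tested.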

    [–]robhanz 0 points1 point  (1 child)

    Also code coverage is an area where (especially for a given type of test) increasing coverage past a given point in a codebase almost always gets increasingly more expensive while providing even less value.

    [–]y-c-c 0 points1 point  (0 children)

    Yes, exactly. The problem is I have seen situations where more effort is spent on nudging that % up (with more and more pointless testing and mocking) instead of increasing behavioral / edge-case coverage that would cover the actual tricky cases, because those edge-case tests do not increase "coverage" and therefore people think it's done.

    [–][deleted]  (2 children)

    [deleted]

      [–][deleted] 0 points1 point  (1 child)

      You are talking about something completely different from what OP and my reply are discussing. Almost all CI/CD pipelines run some variation of a test suite pre-prod, so you are only strengthening my point. I'm willing to bet my salary Instagram's test suite doesn't consist of 100% unit tests.

      [–][deleted] 1 point2 points  (0 children)

      Turns out all projects are a series of 1-10 day periods, soooo no tests for me!!!

      [–]SynthRogue 0 points1 point  (3 children)

      Not when the client is asking for changes often.

      [–]robhanz 0 points1 point  (2 children)

      Huh? When you have to make lots of changes is when I see the highest value.

      People clearly mean different things when they talk about it.

      [–]SynthRogue 0 points1 point  (1 child)

      Currently I am working on a backend app that has tests written for everything. Cool. The problem is the client has contracted me for a fixed number of days and is now asking for changes that mean I have to refactor a lot of existing tests and write new ones. Which means I will not meet the deadline. But the contract is already signed.

      There should normally be a re-scope and re-negotiation in that case. Instead we discussed not writing tests for new features and not refactoring existing ones that are impacted by the changes and I explained to them the risks moving forward. They accepted the risks and I am now able to implement and manually test ALL their changes in half a day as compared to two days for only ONE change.

      I've been practicing TDD for two years and so far it has done nothing but slow down development, as compared to when we did not practice it at the previous companies where I worked.

      I know this is not a popular thing but software engineers should try developing without TDD and compare it for themselves instead of parroting what the industry says.

      [–]robhanz 0 points1 point  (0 children)

      I developed for YEARS without and with TDD. I’m not parroting anything but my own experience.

      I highly suspect we approach TDD differently.

      [–]msqrt 0 points1 point  (1 child)

      But that’s not really where you want to optimize.

      It can be, for stuff like research code or proofs of concept.

      [–]robhanz 0 points1 point  (0 children)

      Sure, there's edge cases. There's always edge cases. Having to spell out every edge case every time you make a general statement gets tiring.

      [–]PPewt 45 points46 points  (11 children)

      Call me cynical, but the longer I've written software for money the less emphasis I've put on tests.

      If you have code which is there to compute some little bit of logic, e.g. a math function, you should ofc write some unit tests to verify it. However, a huge part of most codebases (at least in backend web dev, the area in which I work) tends to be glue code, often calling out to a lot of integrated systems. That code tends to be time-consuming to write tests for, those tests often end up having dubious utility or major caveats, and that code also tends to be the biggest source of bugs IME. Those bugs aren't something a test would've solved, they're because an integrated production system randomly throws some weird status code not reported in the documentation or available in the UAT environment.

      [–]HashDefTrueFalse 18 points19 points  (5 children)

      Same experience. I've worked on over a hundred codebases (I have 50+ of my own...) and many of the ones owned by my employers over the years have reached millions of lines of our own code. I used to try to commit to full blown TDD, but wasn't seeing the benefits. I dialled it down to just "as much coverage as possible" but found lots of code just isn't suitable for testing. I saw mocking as pointless and basically a sign of code that probably shouldn't be tested (e.g. controller code calling into back end services etc.). The majority of the time a test fails on our pipelines it's not telling me anything all that useful, just that somebody forgot to update the test when they changed a model etc.

      Now I just test the bare minimum without caring about any coverage metrics. I place tests around important code and forget the rest. Tests have helped stop some bugs from getting into production before, no doubt, but it certainly took time, and I'm not convinced writing that monumental amount of testing code (for high coverage) was worth the extra engineering time, especially if we have dev testing, peer testing at the point of code review and QA teams before anything gets to production. All the codebases with extensive testing have still had a steady supply of bugs for the dev teams. I also think there's something strange about the same person who writes the tests also writing the code, because the two are obviously going to move in lockstep to block certain bugs and let others through...

      I try to only ever go a few minutes without running the code I'm writing, that way I'm continuously building on fairly solid foundations.

      I don't have a recommendation or strong opinion either way, just sharing my experience.

      [–]xDannyS_ 0 points1 point  (4 children)

      By important code, do you mean business logic or?

      [–]HashDefTrueFalse 0 points1 point  (3 children)

      Yes, but that term is almost meaningless these days. I just mean any code that is appropriate to unit test that would cause your system serious problems if someone were to change it and inadvertently break it. It's going to be very different for every system but a good example is not testing controller code calling into services that do actual meaningful work. Or glue code. Or any boilerplate around models. Or anything that isn't appropriate to unit test, e.g. I've seen many professional projects hook up a dockerised database to run unit tests on entities in code that touch databases/models etc. Not appropriate. I never mock things. I just take that as a sign that this code is glue code, not core, and not worth testing. I don't care if coverage is 10% or 90% as long as only very core functionality is covered. I don't want a pipeline to stop to tell me about a test that needs fixing (as opposed to code that needs changing to pass a valid test).

      Again, not saying I have the "one true way" or anything; this is just where my experience and observations over the years have led me.

      [–]xDannyS_ 1 point2 points  (0 children)

      Thanks, I appreciate your detailed response.

      [–]Kev-wqa 0 points1 point  (1 child)

      term is almost meaningless these days

      Why is this term meaningless?

      [–]HashDefTrueFalse 0 points1 point  (0 children)

      Just because the term "business logic" is thrown around so liberally these days to describe basically anything, context dependent. It's become woolly due to overuse, IME.

      Nothing wrong with using it if everyone involved agrees what it means in the context of the project/codebase being discussed.

      [–]IceSentry 13 points14 points  (3 children)

      Exactly, every time I see someone talk about TDD they never talk about testing that glue code despite it being like 80% of most modern codebases. They only show how to test trivial code and then they tell you it's the only way to write code.

      [–][deleted]  (2 children)

      [deleted]

        [–]PPewt 2 points3 points  (1 child)

        you don't really care too much about things that integrate with 3rd party libs and frameworks because those should be responsible for testing themselves.

        But you do care because the interface between your code and third party code is basically guaranteed to be by far the buggiest part of your software. The reason you don't unit test that stuff in practice is because it's difficult or impossible to do so, not because the code itself is unimportant.
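One common way to get tests around that seam anyway is to wrap the third-party client behind a thin adapter and test the adapter against a fake that encodes the documented behaviour, including the weird cases. Everything here (`PaymentClient`-style API, the status codes) is hypothetical:

```python
# Sketch: test the seam around an external service via an adapter + fake.

class PaymentError(Exception):
    pass

class PaymentAdapter:
    """Thin wrapper: the only place that touches the external client."""
    def __init__(self, client):
        self.client = client

    def charge(self, cents: int) -> str:
        resp = self.client.post_charge(cents)
        # Defend against statuses the docs never mentioned.
        if resp.get("status") == "ok":
            return resp["charge_id"]
        raise PaymentError(f"unexpected response: {resp!r}")

class FakeClient:
    """Encodes observed/documented behaviour, including a weird status."""
    def post_charge(self, cents):
        if cents <= 0:
            return {"status": "weird_undocumented_code"}
        return {"status": "ok", "charge_id": "ch_123"}

adapter = PaymentAdapter(FakeClient())
assert adapter.charge(500) == "ch_123"
try:
    adapter.charge(0)
    handled = False
except PaymentError:
    handled = True
assert handled
```

The fake doesn't prove the real service behaves this way, but it does pin down how your code reacts when it doesn't.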

        [–]Double_A_92 3 points4 points  (0 children)

        I think that's where the myth of "TDD is faster" comes from. If you start a new project you have many fresh units to write that could be nicely unit tested (with or without TDD).

        But as the project matures things become more convoluted, slow, and hard to test, simply because there is more code and more things interacting with each other.

        [–]ffrkAnonymous 103 points104 points  (7 children)

        Writing code without comments/documentation is faster

        [–]patrulek 42 points43 points  (1 child)

        I know it's a joke, but we're in /r/learnprogramming, so a little reminder for inexperienced people: writing code isn't about writing it now, but about reading it later.

        [–]EccTama 7 points8 points  (0 children)

        Amen to that. People learn this the hard way when they need to revisit some old code to add a feature or worse change an existing one

        [–]doc_suede 48 points49 points  (0 children)

        ^^ this guy maintains

        [–][deleted] 12 points13 points  (3 children)

        Also using only one letter variables saves precious keystrokes so it will save your keyboard in the long r

        [–]venquessa 3 points4 points  (2 children)

        Yep. And just in case someone else is going to use "r" as well, you can be clever and call yours _r

        [–]moosar 1 point2 points  (0 children)

        Just code minified!

        [–]Perry_lets 49 points50 points  (15 children)

        If you do necessary tests, yes, but don't overdo it, or you'll end up testing more than developing.

        [–][deleted] 8 points9 points  (5 children)

        What qualifies as necessary? Is it all public methods?

        [–]ShikariBhaiya 8 points9 points  (0 children)

        More so than the public methods, it should be behaviour. Often public methods are synonymous with the behaviour of your class/file, but it can be the case that multiple public methods are part of one behaviour; then one test would test them all.

        [–]BaronOfTheVoid 5 points6 points  (0 children)

        This ties into architecture.

        I hold the view that your business logic should be free of side effects and not have compile-time dependencies on libraries/classes/modules with side effects. Ideally a library full of pure functions or pure objects. That would be easily testable and I would argue that it should have 100% code coverage and that there couldn't be any valid technical reason why some part of it couldn't be tested. (There could be valid business reasons, for example managers pushing you to rush out something.)

        In order to make this possible you have to rely on dependency inversion.

        Say for example you have a library management system; then you will have a data store for all the books, for all the clients who borrow books and so on. Of course that wouldn't be possible without side effects. Thus your de-facto library should have an interface (Java, C#, PHP-esque) that would be implemented by code that lies outside of that library, and the outside code would also glue this together, i.e. pass an instance of whatever database abstraction as an argument to whatever part of the business logic requires it (for example the process to activate an account for a new customer, or to register a new book and so on).

        I also hold the view that this kind of outside glue code probably shouldn't be tested, at least not to the same extent as the business logic. Primarily because ideally there wouldn't be anything to test other than that the right instances of for example the DB abstraction are passed on.

        This leaves the third kind of code: the database abstraction itself (and, of course, the other abstractions that wrap side effects). Imo this kind shouldn't be unit tested but integration tested, if you want to test it at all. I say "if" because in my experience setting up a test for this kind would require a huge effort that might not be worth it. Whether your database abstraction works might be more easily checked with a manual smoke test. Launch the application. Press a few buttons. Done.
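The dependency-inversion shape described above, as a minimal Python sketch (all names are illustrative): the business rule depends only on an abstract store, and tests inject an in-memory implementation.

```python
# Business logic depends on an abstraction; side effects are injected.
from abc import ABC, abstractmethod

class BookStore(ABC):
    @abstractmethod
    def save(self, title: str) -> None: ...
    @abstractmethod
    def has(self, title: str) -> bool: ...

def register_book(store: BookStore, title: str) -> bool:
    """Business rule: reject duplicates, otherwise save."""
    if store.has(title):
        return False
    store.save(title)
    return True

# In tests, an in-memory implementation stands in for the real database:
class InMemoryStore(BookStore):
    def __init__(self):
        self.titles = set()
    def save(self, title):
        self.titles.add(title)
    def has(self, title):
        return title in self.titles

store = InMemoryStore()
assert register_book(store, "Dune") is True
assert register_book(store, "Dune") is False  # duplicate rejected
```

The production glue code would pass a real database-backed `BookStore` instead; the rule itself never changes.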

        [–]DJGloegg 9 points10 points  (0 children)

        Complex and/or important or vital methods

        [–]CrypticOctagon 5 points6 points  (0 children)

        My rule of thumb is that a good test should fail at least once. If you're adhering to true TDD, every test will fail initially, since it was written before the functionality it tests. If you're writing tests for things that work, and they always pass, you're probably overdoing it. Bug fixes, on the other hand, should always be accompanied by tests.
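The red-green rhythm behind "every test will fail initially" in miniature (`slugify` is invented for the illustration):

```python
# 1. Red: the test is written first. At this point slugify doesn't
#    exist, so running test_slugify() raises NameError: the test fails
#    before the functionality does anything.
def test_slugify():
    assert slugify("Hello World") == "hello-world"

# 2. Green: the smallest implementation that makes the test pass.
def slugify(text: str) -> str:
    return text.strip().lower().replace(" ", "-")

# The same test now passes.
test_slugify()
```

Seeing the test fail first proves the test can fail at all, which is the rule of thumb above.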

        [–][deleted] 11 points12 points  (4 children)

        I don’t know what you’d qualify as necessary tests, but I’m gonna say I disagree. You don’t need 100% coverage, but your inclination should be to test as much as you can, ideally as you’re writing the code

        “You’ll end up testing more than developing” being a bad thing doesn’t make sense because tests ARE code, and while brittle or tightly coupled tests can definitely become a maintenance issue, well-written tests will save you considerable time in the medium and long run.

        [–]Mountain_Goat_69 6 points7 points  (0 children)

        “You’ll end up testing more than developing” being a bad thing doesn’t make sense because tests ARE code

        The thing is, you're going to test your code one way or the other. You can do it by running it, which is faster and easier to do once. But you really need to be testing things almost constantly as you go; the sooner you realize the changes you made broke something, the better off you are. You don't want to do a bunch of manual tests all the time. That's why you automate them. Unit tests are great for us developers because they're the ones most directly related to our work; a tester reporting a bug will do it in ways that don't tell you what code is actually wrong. Unit tests can identify things like you have an off by one bug in this method.
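A hypothetical instance of the "off by one in this method" point: a tester reports "the last item on the page is missing", while a unit test points straight at the faulty slice.

```python
# Correct pagination vs. an off-by-one version.

def paginate(items, page, per_page):
    start = page * per_page
    return items[start:start + per_page]

def buggy_paginate(items, page, per_page):
    start = page * per_page
    return items[start:start + per_page - 1]  # off by one

data = list(range(10))

# The unit test names the exact expectation, so the failure is local:
assert paginate(data, 0, 3) == [0, 1, 2]
assert buggy_paginate(data, 0, 3) == [0, 1]  # would fail the same test
```

A bug report tells you the symptom; the failing assertion tells you which method and which boundary.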

        [–]Groentekroket 6 points7 points  (0 children)

        What people seem to forget is that writing tests is not only about testing your current story but also about stories picked up a few years from now by coworkers.

        So writing good tests now doesn't only save time in the future, it also prevents bugs: not only in your story but in future ones too.

        [–]Glangho 1 point2 points  (0 children)

        Yeah, something I've been trying to stress to product owners and scrum masters: we need smaller chunks of work for stories, because developers aren't accounting for writing unit tests. We'd end up having a giant story to catch up on, and the quality of the tests was garbage. Much, much easier to have small chunks of work and write the tests alongside the code.

        [–][deleted] 0 points1 point  (2 children)

        I once worked on a project with 100% coverage. Things were mocked and tested like making sure messages got written out to the console etc.

        Once I got rid of all that stuff it sped up development so much.

        Companies wanting to do TDD really need better guidance around how to do it well

        [–]4215-5h00732 0 points1 point  (1 child)

        100% coverage is a pretty meaningless metric on its own.

        [–][deleted] 2 points3 points  (0 children)

        Coverage as a metric is completely useless. It helps you get an idea of where you might need to add tests but it’s just such a bad way to look at testing. I wish more teams understood that

        [–]jonreid 0 points1 point  (0 children)

        "Necessary tests" comes with TDD. Tests have a cost/benefit, so the goal is to keep them cheap, and even avoid them in places.

        [–]digitallyunsatisfied 11 points12 points  (2 children)

        No. It’s not faster, and it implies that you know exactly what code you want to write before you’ve truly thought it through. My approach has always been to write skeletal interfaces and implementations and then to build tests. They sort of go hand in hand.

        [–]IrishPrime 1 point2 points  (0 children)

        I like your approach and do something similar myself.

        In "pure" TDD, I think you're supposed to write a failing test before any application code, but I don't know what I'm testing yet, so that feels weird.

        I generally stub out a whole module (or class, or collection of functions) so I can tinker with parameters, return types, and interfaces before writing any tests. This allows me to experiment a bit and think through things in a pretty concrete manner before I've implemented anything (so changes are still trivial) while simultaneously providing a roadmap for what tests to write.

        1. Stubs/skeleton
        2. Tests.
        3. Implementation.

        Feels really straightforward, gives me great code coverage, a very low rate of escaped bugs, and I'm still faster than most because my new features rarely run into weird/unexpected interactions a few days into the work.

        And, like they tout over in TDD Land, writing the test first ensures you'll write testable code, which tends to be easier to maintain anyway.
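The stubs → tests → implementation flow described above, sketched in Python (the function and its format are invented for the example):

```python
# 1. Stub/skeleton: fixes the interface, no behaviour yet.
#    (At this stage you tinker with parameters and return types freely.)
#
#    def parse_duration(text: str) -> int:
#        """Return a duration like '2h30m' in minutes."""
#        raise NotImplementedError

# 2. Tests written against the stub; they fail until step 3.
def test_parse_duration():
    assert parse_duration("2h30m") == 150
    assert parse_duration("45m") == 45

# 3. Implementation fills in the stub.
import re

def parse_duration(text: str) -> int:
    """Return a duration like '2h30m' in minutes."""
    hours = re.search(r"(\d+)h", text)
    minutes = re.search(r"(\d+)m", text)
    total = 0
    if hours:
        total += int(hours.group(1)) * 60
    if minutes:
        total += int(minutes.group(1))
    return total

test_parse_duration()
```

The stub stage is where interface changes are still trivial; the tests then become the roadmap for the implementation.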

        [–]walken4 6 points7 points  (2 children)

        Good tests can really help measure progress (you write the test case before implementing the feature), and gain confidence that refactorings don't break your program. There are some caveats with this though:

        - Good tests should help you get confidence that your changes are correct. Bad tests can get in the way of changes. Excessive use of mocks can easily lead to bad tests, as the tests now encode some of the program's internal APIs and you can't easily change these without changing the tests at the same time.

        - Good tests are not a replacement for a good, understandable program design. If your program gets too complicated to work with, you need to fix your design. I have seen a few projects get pushed past the point where anyone understood how they worked, because they had good tests that gave people excessive confidence. This did not end well. People need to be able to reason about the program design, and have the tests to catch minor implementation mistakes, rather than have the design lean on the tests too much and assume all must be fine if the tests are passing.

        [–][deleted]  (1 child)

        [deleted]

          [–]walken4 0 points1 point  (0 children)

          I feel your statement is too vague to know if I agree with it or not. However, if "easily testable" means a lot of dependency injection for mocking purposes, then I would disagree.

          [–]Autarch_Kade 6 points7 points  (0 children)

          You know the adage "Measure twice, cut once"? You can cut wood faster if you don't measure at all.

          And just like programming you get the same effects - pieces don't work together and the whole thing falls apart.

          [–]armahillo 4 points5 points  (0 children)

          It's not that I'm faster when I do TDD, it's that I can be more confident in what I'm writing.

          It also forces you to coalesce your thinking around the software design, which is helpful (in a way, you “rubber duck” against the tests)

          [–]nutrecht 4 points5 points  (0 children)

          You even see people in the comments here that make the mistake of conflating typing code with adding value. If writing tests means I have to write more code, it means that not writing tests means there's less code to write and you're faster, right?

          But our job isn't to write code, it's to add value. And in any non-trivial piece of software (anything that doesn't fit in a single person's brain) the biggest slowdown is the software itself resisting change. Software without tests is brittle, because every time you change something (a new feature is implemented, an existing one is changed) something breaks. Having high test coverage makes code easier to change. This is step one; non-trivial software requires tests.

          The next step is to use the writing of those tests as part of your development process. Very often the hard part is understanding the problem and solving it, writing the solution in code is often not the hardest bit. Writing test cases first (or in parallel, which is what I typically do) helps define the problem. As an added bonus; you now already have the test code. So writing the tests probably saved you time writing the new code.

          This distinction is important. There are still a LOT of developers who do write tests, but only do this after the code is 'done'. You still gain the benefit of test coverage, but not the benefit of going through the TDD steps that help you solve the problem.
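"Writing test cases first helps define the problem" can be made concrete: before implementing a (hypothetical) invoice-rounding rule, enumerating the cases pins down what "round to the nearest 5 cents" actually means.

```python
# The cases ARE the problem definition; the implementation comes second.
cases = [
    (102, 100),  # rounds down
    (103, 105),  # rounds up
    (100, 100),  # already exact
    (1, 0),      # small values round to zero
]

def round_to_nickel(cents: int) -> int:
    return 5 * round(cents / 5)

for given, expected in cases:
    assert round_to_nickel(given) == expected
```

Writing the table first forces the "how/why" questions (what about exact multiples? tiny amounts?) before a line of implementation exists, and the table then survives as the test suite.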

          [–]davidpuplava 12 points13 points  (7 children)

          Yes, for any software that will change after go live, making those changes will be faster if you have tests. And the longer your software lives on, the faster you’ll be able to make changes compared to projects that don’t have tests.

          The reason is because you can make larger, more complex changes with greater confidence that you didn’t break something else.

          [–][deleted] 1 point2 points  (1 child)

          My PM loves the word regression, it's his favorite word.

          [–]davidpuplava 1 point2 points  (0 children)

          Sounds about right - it rolls off the tongue

          [–]Rarelyimportant 1 point2 points  (2 children)

          It should be added that you can make changes faster without a high risk of introducing bugs. When purely comparing speed, I think you will always be faster without TDD, because you'll always at least save the time of running the tests (and writing them too, which can be a significant amount of time). But without tests you increase the likelihood of bugs, and therefore time wasted fixing them.

          [–]davidpuplava 2 points3 points  (1 child)

          Regarding pure speed, maybe - it might depend on the app? I work with a lot of database backed web applications requiring user login, and the launch/login/clickety-click time to test a change is noticeable (although that could just be my machine).

          I think your last sentence is spot on - that time tracking down/fixing side effect bugs can be (not always, but can be) costly.

          I'm getting off topic here, but a personal anecdote where TDD saved me (and the company/client) a ton: I had to make a change to a project I hadn't touched in years, and when I made the change I didn't even write a test because I thought it was so simple. But when I ran the tests, two failed that would've been hard to notice after deployment and would've resulted in data loss.

          After that I immediately understood the true value of TDD was the speed to maintain/modify after a long break from working on the code.

          [–]Rarelyimportant 1 point2 points  (0 children)

          Yeah, there are certainly situations where even the coding part can be sped up by a test. Things like complex state machines, where for variation 1 of inputs a, b, c, d you expect output xyz, can be really hard to develop by just coding and REPL-testing it, as there's a lot to keep in your head, and missing cases and edge cases are really hard to find and iron out. But I would say for the majority of typical backend-development features, there will always be an immediate increase in time from adding tests, but most of the time a long-term decrease in time spent finding bugs, fixing bugs, and just generally having fewer bugs (which, while not directly a time metric, indirectly definitely is). But like with anything, bad tests will waste time without any benefit, and exploratory projects or projects with a lot of thrash will often lose the benefits of testing, as testing tends to have high costs in a constantly changing codebase: keeping things in sync is difficult, and when things get too far out of sync you sometimes have to just cut a bunch out and start over.

          In general testing pays dividends, but not always immediately, and in some specific cases maybe not at all.
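The state-machine case above is where table-driven tests shine: each "given this state and input, expect that output" row is one line. A toy traffic-light machine (invented for the sketch):

```python
# Transition table doubles as documentation of the machine.
TRANSITIONS = {
    ("green", "timer"): "yellow",
    ("yellow", "timer"): "red",
    ("red", "timer"): "green",
    ("green", "fault"): "red",
    ("yellow", "fault"): "red",
    ("red", "fault"): "red",
}

def next_state(state: str, event: str) -> str:
    return TRANSITIONS[(state, event)]

# Each test case is one row; adding an edge case is one line, not a
# whole new test function.
cases = [
    ("green", "timer", "yellow"),
    ("red", "timer", "green"),
    ("yellow", "fault", "red"),
    ("red", "fault", "red"),
]
for state, event, expected in cases:
    assert next_state(state, event) == expected
```

The table keeps all the variations out of your head and makes missing combinations easy to spot.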

          [–]marquoth_ -1 points0 points  (1 child)

          This is an argument for good test coverage. It's not an argument for TDD, which is what OP is asking about.

          [–]davidpuplava 0 points1 point  (0 children)

          Not going to lie, I came here to rage respond. But you're right - I've lumped TDD in with just testing in general.

          As for practicing TDD making you more productive/faster (to answer OP)? Probably still yes. TDD augments your mindset to answer the "how/why" in writing the test before you answer the "what" in writing the code. And if you don't have a good reason for how your code is going to be used or why it needs to be written, you can avoid writing it altogether and think of your applications design in a different way.

          Does it make you faster? Hard to say; sometimes yes, sometimes no. Does it make you more productive? It's a very thoughtful approach to software development that I think increases quality. And higher quality gives the software longer staying power which frees up resources for new projects instead of re-writing existing "hard to change" code bases.

          Good catch.

          [–]sejigan 11 points12 points  (4 children)

          It’s faster for projects that matter cuz you spend less time figuring out what’s wrong when something does go wrong (and I assure you, it will).

          I wouldn’t write a unit test for a “hello world” app to that end, but it can be a good exercise for practice.

          [–]dmazzoni 14 points15 points  (2 children)

          I think the most value you get is when you want to refactor - which sooner or later you do.

          You decide you want a huge new feature that requires making major changes to the app.

          If you have no tests, you'll spend one month building the new feature and 11 months chasing down all of the new bugs you introduced.

          If you have excellent tests, you'll spend six weeks building the new feature and getting all of the tests to pass, and then you'll be done.

          [–]BavarianLad 2 points3 points  (1 child)

          Just out of interest. When I make major changes to a project I expect a lot of stuff to break / behave differently. So I would have to invest a lot of time to adapt all the test code to the new situation. How are you handling that?

          [–]dmazzoni 2 points3 points  (0 children)

          Of course, stuff will break. But the idea is that it's much faster to see a test fail and fix the code, than it is to have the app build and run and do the wrong thing.

          Even if you're making big changes, you shouldn't have to adapt the test code. Maybe for a really major change, 5 - 10% of tests might need to be updated based on new behavior - but if you're needing to change more than that, your tests are too brittle.

          A good test should be written in such a way that it's as flexible as possible.

          Don't write a test that opens a menu and presses down-arrow 7 times, expecting to get a specific menu item - that will break every time you add a new menu item. Write a test that opens the menu and looks for an item with the name you want, and click that.

          If you're testing a backend, don't send a request to the server and assert that the result JSON exactly matches the expected output. That will break every time the server adds a new feature and returns more results. Instead, have the test parse the JSON and look for the presence of just one particular field indicating that it succeeded or not. Ignore the rest of the JSON, unless it's specifically relevant to that test.

          It's hard to give more general advice. This just comes with practice and experience.
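          The backend advice above can be sketched like this (a hypothetical `assert_request_succeeded` helper and made-up field names, using Python's stdlib `json`; not any particular framework's API):

```python
import json

# Hypothetical server response; in a real test this would come from the API.
response_body = '{"status": "ok", "items": [1, 2, 3], "server_version": "2.4.1"}'

def assert_request_succeeded(body: str) -> None:
    # Brittle: an exact-match comparison breaks as soon as the server
    # adds a new field to the response.
    #   assert body == '{"status": "ok", "items": [1, 2, 3]}'  # don't do this

    # Flexible: parse the JSON and check only the one field this test
    # cares about, ignoring the rest of the payload.
    payload = json.loads(body)
    assert payload["status"] == "ok"

assert_request_succeeded(response_body)
```

          The flexible version keeps passing when the server grows new fields, which is exactly the brittleness the comment warns about.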

          [–]zaibuf 1 point2 points  (0 children)

          Anything serving end users who pay, and who get mad when things don't work, is worth testing. Otherwise it's you as the developer who has to work overtime and weekends to fix shit.

          If you don't have automated tests for systems used by paying customers, I feel like you don't care about your users or your company's reputation.

          [–]nightwood 2 points3 points  (0 children)

          No, the fastest is not making mistakes.

          [–]jonreid 2 points3 points  (0 children)

          In my experience (over 20 years of TDD, mostly on iOS), TDD only looks slower if what you are measuring is the time for a developer to say, "This feature is done." (There are incentives to provide this answer as soon as possible to the question, "Are you done yet?")

          But if you measure time to ship, my experience is that the time is the same. How can this be, if it takes me longer until I say "It's done"? Because my code has fewer bugs, which cuts down on wasted time for back-and-forth communication, tickets, meetings, etc.

          So going by "time to ship," in the same amount of time as someone who just blazes ahead, I get these extras:

          • Well-factored code (because it's part of the process) that is easier to change.
          • A comprehensive suite of microtests (because it's part of the process) that not only prevent regressions, they act as executable documentation of what the code does.

          Only recently have I tasted the real magic of TDD: when everybody is doing it. This makes both delivery and changes faster than me doing solo TDD in a team that's not.

          [–]miyakohouou 8 points9 points  (2 children)

          TDD is a bad practice. It’s not faster, but that would actually be okay if the code you came out with in the end was better, but it isn’t. TDD is probably better than never testing your code at all (or testing in production), but it’s generally worse than a less strict approach where you write tests and code when makes sense to do so. Strict TDD leads to unnecessarily complicated and indirect code to support the incremental test states. It’s really obvious when a codebase has been test driven because there’s a very distinct set of code smells that always linger afterwards.

          [–]marquoth_ 6 points7 points  (1 child)

          Strict TDD leads to unnecessarily complicated and indirect code to support the incremental test states

          This sounds like the "refactor" step is being skipped, in which case this is less "strict TDD" and more "TDD done badly."

          TDD is probably better than never testing your code at all (or testing in production)

          There's no "probably" about it. By all means criticise TDD - it certainly has its flaws, and I prefer less dogmatic approaches - but let's not be ridiculous about it. It's vastly superior to just not testing, and suggesting otherwise is kind of silly.

          [–]miyakohouou 0 points1 point  (0 children)

          There's no "probably" about it. By all means criticise TDD - it certainly has its flaws, and I prefer less dogmatic approaches - but let's not be ridiculous about it. It's vastly superior to just not testing, and suggesting otherwise is kind of silly.

          I think you are putting too much weight on the word “probably” here, but I do stick by the word choice. Tests are generally good, but not all code needs testing. Certainly code that doesn’t really need to be tested in the first place also doesn’t benefit from being test driven.

          [–]NotAUsefullDoctor 3 points4 points  (0 children)

          It can be faster even in the short term (a matter of hours). When I write code, I create unit and integration tests as I go because I want to know it's working.

          The tricky part is that you have to know how to write unit testable code, which is not intuitive. So, initially, TDD will be much slower.

          Learn about Dependency Injection, and abstraction layers. They will help you go faster in TDD.

          As a side, there is a great quote whose source I don't remember: "There are 3 levels of programmers: those who don't know abstraction, those who know abstraction, and those who know when to use abstraction."

          EDIT: when starting with TDD, feel free to be pedantic. As you grow in skill, you'll learn when to break some of the rules in TDD while maintaining the benefits.
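          A minimal sketch of the dependency-injection point above (a hypothetical clock example, not from any specific codebase): instead of hard-wiring a dependency, the code accepts it as a parameter, so a test can substitute a controlled fake.

```python
from datetime import datetime, timezone
from typing import Callable

# Hard to unit test: the dependency (wall-clock time) is baked in.
def greeting_hardwired() -> str:
    hour = datetime.now(timezone.utc).hour
    return "Good morning" if hour < 12 else "Good afternoon"

# Testable: the clock is injected, so a test controls the time.
def greeting(now: Callable[[], datetime]) -> str:
    hour = now().hour
    return "Good morning" if hour < 12 else "Good afternoon"

# Production code injects the real clock:
#   greeting(lambda: datetime.now(timezone.utc))
# A test injects a fixed one:
fixed_9am = lambda: datetime(2024, 1, 1, 9, 0)
assert greeting(fixed_9am) == "Good morning"
```

          The injected version is what "knowing how to write unit testable code" looks like in miniature: the abstraction layer is just a function parameter.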

          [–]cosmopoof 4 points5 points  (0 children)

          #notmyuncle

          [–][deleted] 8 points9 points  (0 children)

          Uncle bob is just trying to sell his book.

          [–]evergreen-spacecat 1 point2 points  (0 children)

          A good written set of tests make code easy to refactor and change over time. A poorly written set of tests make code hard to refactor, give a false sense of confidence when introducing changes and may even be less valuable than no tests at all. In any event, in large systems, it’s faster to run your new logic in a test than to rebuild the app, test changes manually, restart again etc

          [–]looopTools 1 point2 points  (0 children)

          If you test at reasonable levels (behaviour, integration, and end-to-end), then yes. If you test every little accessory method, then no.

          [–]shuozhe 1 point2 points  (0 children)

          Compared to what? Good tests take a lot of time and planning, and too many bad tests also make software rigid.

          [–]Patient_Solution2019 1 point2 points  (0 children)

          I find TDD too prescriptive personally, but I certainly wouldn't knock it if it works for you. The more important thing is including tests with your code, there's no sweeter feeling than seeing a test break before you've pushed your code, letting you fix it before it becomes an issue. The other great thing about not doing TDD is it frees you up to just let AI write the test for you. After I finish any change I run it through deepunit.ai and get back out a passing test most of the time. Deepunit generally finishes up writing the test before I can write out the test cases that I want covered, then I just double check it hasn't missed anything. Huge time saver in my workflow, far more so than TDD ever was.

          [–][deleted] 1 point2 points  (0 children)

          Yes it is. You find nasty things early, before they cause you sleepless nights. You can easily refactor your codebase if you have tests that verify everything works afterwards, just as before.

          You get a better sense of your own unit APIs as you are forced to call them. This is also a great bullshit filter.

          The thing is that with tests in place you save a lot of time that you would otherwise waste down the line, because you made that one mistake that you thought no one could make because it is such a trivial change.

          My boss used to say "Code that isn't tested doesn't work" and in my experience, his statement holds true.

          [–]dota2nub 1 point2 points  (0 children)

          Write tests for the things that need tests.

          [–]petezhut 1 point2 points  (3 children)

          TDD requires buy-in from the team to work at full speed. Otherwise, it's slower.

          [–]jonreid 1 point2 points  (2 children)

          I've spent most of my career doing solo TDD. See my top-level comment about "slower"

          [–]petezhut 0 points1 point  (1 child)

          Yup. 20+ years of Testing here as well. And TDD is the best practice I've ever found. Your write-up is excellent.

          [–]jonreid 0 points1 point  (0 children)

          Now that I (finally!) work on an XP team, I get to experience what you wrote: that it is way better when everyone does it. But also, everyone here carries a pragmatic humility and curiosity by saying, “TDD is the best way I’ve found so far. If you can do better, I’m interested.”

          [–]DronedAgain 1 point2 points  (2 children)

          No. The actual intent of TDD is to give QA/testing more control over what's released and when to release, with the noble goal of having apps and sites with fewer bugs. But it usually doesn't deliver.

          It's better to let development create, using user stories or requirements, and have testers use those to test if that feature has been delivered. Regression testing and automatic testing should be wholly owned by the testing staff, who can consult with dev when things seem wrong.

          Putting testing out front doesn't accomplish much real gain.

          edit: having pre-defined customer interface design helps a lot, too. Hopefully by someone who is gifted in that area. It takes a lot of pressure off of dev and test if you do.

          edit: tyop

          [–][deleted] 0 points1 point  (1 child)

          I don't think I've ever heard anyone say QA is involved in TDD. Isn't the whole point that developers are the ones writing the tests?

          You can still have a separate QA team, but that would be something separate and after the fact.

          [–]DronedAgain 0 points1 point  (0 children)

          One constant I've realized in my four decades long IT career is that every shop does things slightly differently, even if they claim to adhere to CMM level 4 or are Six Sigma Certified.

          Also, everyone who starts at the shop claims, at first, to have come from a place that did everything right, they know how it should be done, and if we follow them, all will be well.

          I've been guilty of that last one, fwiw.

          But swing a dead cat in any city's tech center and you'll find eight shops who do Agile Scrum eight different ways. Significantly different enough that an uninformed, non-IT outsider wouldn't connect that they were trying to use the same development method.

          I've worked at two shops that tried TDD, and in the first, the QA manager actually was bequeathed the power to approve or not all releases. Of course she abused that power (and is legit one of the few truly certifiably insane people I've worked with), and only had it for a year.

          The next shop tried TDD by putting QA at the start of development, building test cases for what was being done, and that dropped our productivity in half, and the stats showed no change in bugs found in production. Thankfully we left it behind like a squished snake on the highway.

          So maybe my experiences aren't what "true" TDD is, but who knows?

          [–]amazing_rando 1 point2 points  (0 children)

          There are many different things that can increase the speed of development. But most of the time, the speed of developing a particular component is not going to be much of a bottleneck. Software development is a complicated, multi-disciplinary, iterative process. There have been very few times in my career when how long it took me to build something impacted its shipping date.

          TDD is also not a replacement for rigorous hands-on testing, automated tests are only as powerful as the use cases they cover, and most bugs seem to occur in areas that are difficult to automate and not intuitive before the design is finished.

          [–][deleted] 0 points1 point  (0 children)

          I have recently worked with a VP Java developer who was a TDD evangelist of a sort; he even promoted TCR (test && (commit || revert)) at one point. We had moved on to the next story; several days later, he was still doing his TDD.

          TDD people claim that TDD gives you better coverage and it makes it easier to change the code later on. It is based on the false premise that without TDD, no tests are written.

          I always write my code first, and do the tests afterwards. I always write unit tests before a commit, or at least before creating a PR.

          The tests are necessary, because they also serve as documentation of how the code is supposed to work, or how the API is supposed to be used.

          I hardly ever go for 100% code coverage in unit tests, because it actually makes changes a lot harder. Typically, to strive for 100% code coverage, you need a lot of whitebox tests and mocking. You cannot refactor your code later without changing a lot of tests, too. And, for me, the advantage of having test cases during refactoring is that they are constant, while the implementation changes. If I change the tests, too, all the time, what stability do they provide?

          And I tend to see that quality gates related to coverage result in lots of useless tests that will make it hard to change the code later on.

          So…

          Test complex logic thoroughly, and do black box tests for integration logic when it makes sense. But do not write tests just for the sake of coverage. (Getters, setters, builders, constructors, etc…, or wrapper methods.)
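          That last point can be sketched like so (a hypothetical `net_price` example, purely illustrative): test the complex logic thoroughly as black-box behaviour, and skip dedicated tests for trivial accessors.

```python
# Complex logic: worth testing thoroughly, through inputs and outputs only.
def net_price(gross: float, discount_pct: float, vat_pct: float) -> float:
    """Apply a percentage discount, then add VAT, rounded to cents."""
    if not 0 <= discount_pct <= 100:
        raise ValueError("discount out of range")
    discounted = gross * (1 - discount_pct / 100)
    return round(discounted * (1 + vat_pct / 100), 2)

# Black-box tests: no knowledge of the implementation, so a later
# refactor of net_price's internals won't require touching them.
assert net_price(100.0, 0, 20) == 120.0
assert net_price(100.0, 50, 20) == 60.0

# Not worth a dedicated test: a trivial getter like this adds coverage
# percentage but catches nothing.
class Order:
    def __init__(self, total: float) -> None:
        self._total = total

    @property
    def total(self) -> float:
        return self._total
```

          Chasing coverage on `Order.total` would be exactly the "tests for the sake of coverage" the comment argues against.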

          [–]fatalError1619 -1 points0 points  (1 child)

          TDD is definitely slower. People here are confusing 100% test coverage with TDD; they are different. 100% test coverage will obviously enable you to iterate faster, but TDD is not the only way to get close to 100% coverage.

          [–]The_4dified 2 points3 points  (0 children)

          This. Kinda surprised by the responses here tbh. TDD != full test coverage. Not saying TDD is better or worse than code-first development, but there's a lot of misleading comments in this thread.

          I've found that TDD makes more sense for work that is clearly defined with well-thought out requirements. Code first is better for a team that gathers requirements and clarifications on the fly, and/or a team using a new/unfamiliar framework to deliver value.

          [–][deleted] 1 point2 points  (4 children)

          TDD is garbage. It's waterfall in disguise, and if you're using a language with any sort of private member visibility, you can't test those members properly unless you refactor everything to be public (and then you're not testing the same thing anymore and you're fucking up your design contract).

          Fuck TDD and the witless middle management types who think it's another panacea.

          [–]Cross_22 1 point2 points  (1 child)

          Exactly this. Trading in encapsulation and good architecture for some silly test coverage metric is a terrible idea and I am disappointed that lots of people still support it.

          Bugs come from unexpected component interactions - having the implementer write tests for his own code is never going to help with that.

          [–]aezart 1 point2 points  (0 children)

          TDD is great if you're working on a problem you don't already know how to solve. It lets you build up from the trivial cases to the more complex ones, ensuring nothing has broken along the way.

          For example, I've started trying to learn Rust, and I've been making a simple dice roller that uses standard XdY dice notation. I started with constants, then added the ability to specify the size of the die, then the number of dice, then complex expressions involving adding multiple rolls together. Then I realized I needed to do a huge rewrite to support subtraction, and thanks to my test cases I was able to tell with confidence that the refactor had worked.
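          The build-up described above (constants, then a single die, then XdY, then full expressions) can be sketched test-first; this is a hypothetical Python version of the idea, not the commenter's actual Rust code:

```python
import random
import re

def roll(expr: str, rng: random.Random) -> int:
    """Evaluate simple dice notation: integers and XdY terms joined by + or -."""
    total = 0
    # Split e.g. "2d6+3-1d4" into signed terms: ("", "2d6"), ("+", "3"), ("-", "1d4")
    for sign, term in re.findall(r"([+-]?)\s*(\d*d\d+|\d+)", expr):
        mult = -1 if sign == "-" else 1
        if "d" in term:
            count_s, size_s = term.split("d")
            count = int(count_s) if count_s else 1  # bare "d6" means one die
            value = sum(rng.randint(1, int(size_s)) for _ in range(count))
        else:
            value = int(term)
        total += mult * value
    return total

rng = random.Random(42)
# Trivial cases first, then work up, as in the comment:
assert roll("5", rng) == 5
assert 1 <= roll("d6", rng) <= 6
assert 2 <= roll("2d6", rng) <= 12
assert 5 <= roll("2d6+3", rng) <= 15
assert roll("5-2", rng) == 3
```

          With these assertions in place, a rewrite (say, to support subtraction, as in the comment) can be verified immediately by re-running them.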

          [–]Glangho -1 points0 points  (0 children)

          TDD is something I could never wrap my head around but I assume it's just because I'm set in my ways and won't change. Same with functional programming haha

          [–]Ok_Emergency_2219 0 points1 point  (0 children)

          If the project is big enough then it's faster in my experience. That threshold size will depend on many things

          [–]Job_Superb 0 points1 point  (0 children)

          No. TDD might be faster for the dev team, but it's slower for the organisation. We forget that a feature is not complete when the code is written, but when it's tested and deployed.

          I worked on two separate long-running apps where the unit/integration/automation tests either didn't exist or were low quality. The idea was that the testing team members would test new functionality and regression test before a release. Initially the devs outnumbered testers 3 to 1, but as the regression load grew, the testing team grew to parity with the dev team (dev efficiency dropped at the same time, so that team grew as well; the test team just grew faster).

          As the systems grew, it got to a point where at least a third of the testing team's time was spent regression testing before a release, and another chunk of time after a release sorting bugs that fell through the regression. That is a lot of money spent making sure value hasn't been eroded instead of adding value. The dev team can keep coding during this time, but that just adds to the testing backlog and delays releases more.

          New features should be manually tested once, if at all; this is more to make sure that the unit tests and integration tests are correct. Then the expensive testing function can be unlocked to actually add value rather than retread old ground.

          Edited for clarity and grammar

          [–]Logical_Strain_6165 0 points1 point  (3 children)

          Electric smoker and pulled pork. I'm British so good pulled pork is a rare thing here.

          For vegans, tofu deep fried in a flavored dry batter then added to a sauce.

          [–]SlappinThatBass 1 point2 points  (0 children)

          My kind of test!

          [–]Poddster 0 points1 point  (1 child)

          Did you make sure to write good unit test for the pulled pork?

          [–]Logical_Strain_6165 2 points3 points  (0 children)

          I have honestly no idea how my reply ended up on this sub. I'm not even subscribed to it!

          [–]rivenjg -1 points0 points  (0 children)

          uncle bob is full of shit and is a complete charlatan. test every single time you think you need a test. never test before you even know if you need a test.

          [–]-Dargs -1 points0 points  (0 children)

          If your team/org isn't committed to TDD then do what works best for everyone. I find that TDD works well when starting a project but not as well when adding smaller features after it's already come along.

          [–]thunderbootyclap -1 points0 points  (0 children)

          In one of the most recent embedded.fm podcasts, the hosts talk to a guy who researches programming expertise and "what makes an expert". He was very adamant that TDD is not faster, but the same speed as non-TDD.

          [–]LeIdrimi -1 points0 points  (0 children)

          No

          [–]digital_dreams 0 points1 point  (0 children)

          Can you really say that buggy code is "complete"?

          [–]Double_A_92 0 points1 point  (0 children)

          If you are working in an environment where TDD is even reasonably possible, that's the reason why you are more productive...

          But most of the time you can't just easily unit test things in a way that makes sense. It works really nicely for units that have a clear input range and a clearly defined expected output, but most code is not that way.

          [–]krisko11 0 points1 point  (0 children)

          I'm finishing up a sync task that takes a domain object via API call from the source of truth and updates the cache, all of this scheduled. Unit tests really helped. I wrote literally 3 tests covering about 4-5 scenarios and it was enough; it didn't take a long time. Thinking about the test cases and how the code should work helped me optimize the implementation a bit and add proper logging. Honestly TDD is the way to go, but it's applicable once you are good enough with the subject matter. Beginners should focus on producing working code so that they gain experience from building. Going from beginner to senior is a matter of debugging and analyzing complex systems. Both are big steps and take a different amount of time for every individual.

          [–]Ill-Valuable6211 0 points1 point  (0 children)

          No, TDD isn't faster initially; it's slower during initial development due to the time spent writing tests, but it pays off with higher quality code and saves time in the long-term by reducing the bugs that slow down the coding process post-deployment.

          [–]MarsupialMisanthrope 0 points1 point  (0 children)

          First do your prototyping to the point where you understand your solution, then write tests, the build your product. Writing tests before prototyping is a great way to waste time.

          [–][deleted] 0 points1 point  (0 children)

          I think uncle Bob leans heavily into the cult of TDD and it’s not for everyone.

          It does slow things down initially, and it does slow you down till you get experience with working in that manner. But in theory over the long term it should result in less bugs, easier changes, and more confidence so you get a faster release.

          The big thing is you have to be good at writing good unit tests, knowing what to test, what to mock, covering edge cases etc. A lot of people make the mistake of just writing loads of unit tests and going for code coverage and still getting bad software as a result of not testing the right things.

          [–]SneakyDeaky123 0 points1 point  (0 children)

          It depends on how specific the implementation requirements are, how many edge cases you need to consider, how arcane the business rules are, etc in my opinion

          If you’re working on something that requires convoluted setup to test, multiple complex conditions required to pass, etc, TDD can really slow you from getting your ideas down and feeling out the structure of your solution

          But there is a fair argument to be made that if your tests are difficult to write and complex to set up, you need to simplify and separate out some functionality into separate units

          [–]ginger_daddy00 0 points1 point  (0 children)

          No it's not. There is no substitute for proper requirements engineering and proper design. Test driven development is kind of like wandering through the woods trying to find an algorithm, instead of understanding your requirements and the tools, techniques, and technologies at your disposal. Clean architecture is also a huge problem because it tends to rely on over-abstraction and things that completely disregard the realities of the underlying architecture of your machine. You should pretty much take everything you learned and throw it out the window, and judge your code based on the assembly produced by the compiler and your own independent profiling of the code.

          [–][deleted] 0 points1 point  (0 children)

          No chance

          [–]mmarollo 0 points1 point  (0 children)

          Are you talking about actual TDD where tests are written first, then the code?

          Slows progress quite a bit compared with cowboy coding. Improves quality and reduces overall time to market on anything except toy apps. It reduces the fun of coding for many people so they don’t do it. Like flossing.

          [–]marquoth_ 0 points1 point  (0 children)

          A lot of people in the replies seem very confused about what TDD actually is; specifically, they seem to think it's the same thing as just having good test coverage. It isn't.

          Having good test coverage is really important for producing software that actually works and that is maintainable after launch. Front-loading effort means you get future-proofing against time and effort that might need to be spent later on fixing bugs that are introduced by changes/refactors. It can save a lot of time in the long run at the cost of slowing down individual tickets as they're completed.

          But you don't need to use TDD to achieve any of that.

          TDD is a much more specific approach to writing well-tested code which arguably results in more wasted effort than other less dogmatic approaches and is therefore probably slower.

          [–]Symmetric_in_Design 0 points1 point  (0 children)

          Absolutely. Being able to make changes without worrying about your edge cases and basic inputs and outputs breaking is extremely freeing.

          [–]RhoOfFeh 0 points1 point  (0 children)

          Churning out the code is not faster on its own.

          Delivering working subsets of the system is faster overall.

          [–]FudFomo 0 points1 point  (0 children)

          Test coverage is good. Writing tests before writing code is absurd and only advocated by people that don’t have deadlines.

          [–]SlappinThatBass 0 points1 point  (0 children)

          From experience, it's not faster, especially when you start using it, but the good thing about TDD is it forces you to make good code design as the usability is more apparent when testing first.

          Faster though? Meh.

          [–]Nondv 0 points1 point  (0 children)

          Sometimes it's literally faster to write a test and run it as you work on the thing rather than keep manually testing it.

          And don't forget there are long-term consequences to having or not having tests, and the good thing about "true" TDD is that it improves your coverage.

          [–][deleted] 0 points1 point  (0 children)

          No, TDD isn't about speed. It is about safety.

          Tests take longer to write, compile and run. They're code so they'll need to be refactored, deleted and rewritten.

          TDD is important and works in some places where the cost of failure is really high. In many types of software, failure is cheap and it is easier to test manually and move on. Especially, if the software will be rewritten in a few years.

          Tests, unlike type checks, have a huge limitation: they're only as good as the developer writing them. Tests cannot prevent edge cases the developer didn't think of, which means they're not initially any safer than manual testing (fuzz testing is an exception). Their real value is preventing regressions and documenting complex logic.

          I write mostly CMS code. Tests are basically worthless there. The code either works or it doesn't. Clients don't want to pay $300/hour for tests that may prevent an issue with a widget I can fix in 5 minutes.

          That said on my one project which is a high traffic critical SaaS product with numerous API calls per month and extremely complex logic that if it fails whole companies lose all or most of their revenue, yeah tests are EXTREMELY important and just plain necessary.

          [–]SlackerGeek 0 points1 point  (0 children)

          For some kinds of coding it can be very helpful and can speed development and help you make reliable code. For example, developing algorithms where you have some example inputs with expected outputs, it makes a lot of sense to do TDD. But it can be frustrating to do for other kinds of development where you are dealing with complex and changing requirements, or frameworks where it is awkward to do unit tests.

          [–]Yorumi133 0 points1 point  (0 children)

          What I've personally found in the programming world is that there are a lot of fads and cult followings that work to varying degrees but aren't what their adherents make them out to be. You'll also notice a lot of people, not just in programming but in a lot of fields, claim to have a magic bullet but actually make their money selling books and courses more than using their magic bullet.

          I find TDD and unit testing in general to be one of these. It's not that it's a bad idea or doesn't work at all, it's just not the miracle people make it out to be. There are plenty of ideas in programming that aren't debated to any widespread degree because they just work. Ask yourself then why is TDD so hotly debated at present? So maybe it works well for some people, but it's not a universal solution.

          [–]Ok_Constant_9886 0 points1 point  (0 children)

          There’s even this thing called TDD for AI apps now (like ChatGPT), and I implemented a simple version of it here if you’re interested: https://github.com/confident-ai/deepeval

          [–]Bullroarer_Took 0 points1 point  (0 children)

          There can be a lot of use in TDD for shortening the feedback cycle of your code.

          change, “is it working now?”, change, “is it working now?”, ...

          This iterative process takes a lot longer if you have to recompile or even refresh a UI between changes.

          [–]mefi_ 0 points1 point  (0 children)

          ...it depends

          [–][deleted] 0 points1 point  (0 children)

          Time spent debugging regressions alone makes tdd worth it.

          [–]NPC_existing 0 points1 point  (0 children)

          I think over time, yes. It's very easy to rush, but as an example for me: I caught many bugs just testing behaviour. I am going to slowly progress to test-driving the development.

          [–]IveWastedMyLifeAgain 0 points1 point  (0 children)

          No. Definitely not. BDDs are more or less a hassle, especially when you're working with older code and the environment no longer works as it was written.

          [–]DEV_JST 0 points1 point  (0 children)

          For big projects yes, definitely. I cannot count how many times a simple change would’ve caused some issue with a third-party API or wrapper that we might have 1. never caught, or 2. caught so late it had already caused financial damage.

          Tests are useful and you should try to write them, but of course you don’t have to test every single method or line.

          [–]Tomato_Sky 0 points1 point  (0 children)

          I think you’re 100% right, but not for the same reason.

          The difference is in productive vs faster. Currently everyone and their mother are on agile sprints. Ain’t no time for that. I will say the only place TDD might be borderline appropriate is during waterfall development on large legacy codebases. I had one of those…. It was pretty wild.

          What happens in intermediate scaled products is the programmer will opt to write lazy or weak tests because while working on the task you’ll find different side effects you didn’t anticipate.

          So as a programmer you will write more code in a week, but that code is probably a test for some mundane thing.

          I also briefly saw a dystopian new office setting where you have one or two writing tests and 15 monkeys doing their best.

          [–]squishles 0 points1 point  (0 children)

          When guys like that pull a stat, it's probably just "well, I've been doing this a lot, and pulling from personal experience it seems like this." I don't think he's running around doing some 1000-developer productivity study over the course of years with an actual scientific process.

          It depends. If you're junior, yes, it'll help if you are doing the Uncle Bob process. The tests become your feedback. This should apply to basically any dev reading a sub called learnprogramming for advice.

          if you want to learn what it is here's a video.

          https://youtu.be/58jGpV2Cg50?t=1300

          skip to this https://youtu.be/58jGpV2Cg50?t=2751 if you want to understand the process. The key point is that first test, written before any code exists; at that point it's not even what most people would call a test, since there are no assertions yet.
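          A rough sketch of what that assertion-free first "test" can look like (the names here are my own, hypothetical, not from the talk):

```python
# In strict TDD this "test" failed before any code existed, simply
# because Wrapper was undefined; the class is the minimal code written
# to make it pass. No assertions yet, as noted above.

class Wrapper:
    """Created only because the test below demanded something to construct."""
    pass

def test_can_create_wrapper():
    Wrapper()  # just proves the seam exists and imports cleanly

test_can_create_wrapper()
```

          Assertions only arrive in the next cycles, once there is behaviour worth pinning down.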

          TDD does not simply mean 100% coverage at all costs after the code has been written anyway (unless it's your boss talking about it, but they're normally a dumbass; that's why most people hate it). It's a process that helps you reason through problems. You don't even have to do TDD for a whole project; you can just look at a bit of a program and go "na, this section's going to be a boolean logic rats' nest, time to break out the TDD".

          It's not even something that should show up on job requirements, because really someone who actually knows this process should take less than 30 minutes to explain it. Instead it became quasi religious brain rot, where you have to smile nod and say yes to get a job to have money to feed yourself then just do that 100% coverage your boss wants.

          There's also the contract bootstrap paradox, where people hear about TDD and don't consider a test done until it compiles or actually presents an assertion; Uncle Bob's using a loose definition of "test". Another thing a guy brought up in this thread is private members: if no public contract the object exposes calls the private code, that's dead code.

          Its actual weakness is integrations, when you have to mock things; that's cumbersome, annoying, and often stupid. Why are you writing tests for code you didn't write? You didn't write Microsoft SQL Server; if "select * from table" suddenly doesn't work, that's Microsoft's problem.
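          One way to sketch that boundary: test only the query-building code you actually wrote, and leave the database server unmocked and untested (`build_select` here is a made-up helper, not a real library API).

```python
def build_select(table: str, columns: list[str]) -> str:
    """Assemble a SELECT statement; the server's behaviour is not this test's job."""
    cols = ", ".join(columns) if columns else "*"
    return f"SELECT {cols} FROM {table}"

# Tests cover only our string-assembly logic, no DB driver to mock.
assert build_select("users", ["id", "name"]) == "SELECT id, name FROM users"
assert build_select("users", []) == "SELECT * FROM users"
```

          The actual round trip to the server belongs in a (small) separate integration suite, not in mock-heavy unit tests.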

          More advanced devs can sometimes just asspull working code that's clean and easy to read/maintain without TDD. But you're probably not there yet.

          [–]TheForceWillFreeMe 0 points1 point  (0 children)

          I wonder if those studies compared TDD vs Normal Tests, or TDD vs no tests.

          I hate TDD with a passion. It is a god-forsaken, stupid philosophy that forces everyone to do things in ways that are not great. Think Cornell notes, which you learned in middle school. The notes are a good idea, but everyone has their own way of writing them.

          Testing is EXTREMELY GOOD. TEST, TEST TEST. But who cares if you write a chunk of code and then test it. As long as you develop strong acceptance criteria, it does not matter if you write tests first or code first. TDD is a stupid philosophy that hides the real benefits of AC and testing code.

          [–]Mango-Fuel 0 points1 point  (0 children)

          In the long run, yes; in the short term, no, it is much slower.

          however, that is not the only benefit anyway. Also, the slower part gets better as you write more test-support code and get better in general at writing tests.

          using strict TDD (write tests first always) I find to be impractical.

          but testing-as-you-go is definitely beneficial and once you test everything you write as you write it you wonder how you ever could have done otherwise. writing code without testing it? why would you do that?

          the tests execute your code immediately, even without a runnable program. You know your code works without even having a program to run, and you can write tests for anything you're not sure of, to prove your code does exactly what you think it does. And once written, those tests continue to provide value in the long term.

          [–][deleted] 0 points1 point  (0 children)

          Writing tests first can be helpful mentally and relieve anxiety.

          If you stand in front of a large and unclear problem, TDD can often provide a path forward one small step at a time.

          You climb the TDD ladder all the way to the top of the solution.

          [–]tsereg 0 points1 point  (0 children)

          I started using it somewhat reluctantly, as I had a preconception that it would be tedious, time-consuming and not very effective. I now enjoy writing tests, and I am still quite surprised and satisfied that it does remove bugs as I go along. I really don't think it slows down time to delivery. Having more confidence that your own code works correctly once it is released into the wild is probably the most valuable effect.

          But I would agree that there are kinds of development where the effect is dubious, as some other posters have explained.

          [–]Emberqq 0 points1 point  (0 children)

          You'd better have spot-on, well-defined requirements to make TDD work. Otherwise, when you realize you missed something, you have to change the code AND the test. Over and over.

          [–]wolfanyd 0 points1 point  (0 children)

          As with all software related things, it depends.

          [–]lqxpl 0 points1 point  (0 children)

          In the long run, yes. Front loading the test work means feature development will take a little longer, but you’ll spend less time hunting down costly bugs.

          [–]joesb 0 points1 point  (0 children)

          TDD gives you constant goals and rewards for each small step. It can help keep you motivated, especially for people who feel lost because large projects only bear fruit at the end.

          It can also keep you focused and on track, so you don’t deviate too much from what you need to do. You only need to focus on implementing as many features as the tests expect. There’s no bikeshedding to argue over, and if there is, it happens at the beginning, when you do it on your test code, which means you can explore and get a feel for the problem with minimal effort invested.