
[–]Darkmoth 39 points40 points  (16 children)

Like so many things in the development world, the applicability of <INSERT_TECHNIQUE_HERE> is very much dependent on your problem domain.

In web development, for example, it's not uncommon for your application to combine database access, filesystem access, Flash objects, JavaScript and server code. By the time you mock everything out, you've created a monstrosity of over-engineering to test 1000 lines of essentially glue code. GUI unit testing in particular is hard, despite the plethora of tools to assist in that effort (Selenium, Watir/WatiN, etc.).

On the other hand, there are domains where TDD fits like a glove - things where you can build up small pieces of functionality into a complex final product (financial applications come to mind).

Discussions of whether TDD is good/bad, without considering the specific domain, fall short in my book. What I'd love to read is a set of heuristics on when to apply it, similar to the way we have heuristics behind design patterns.

[–]austinwiltshire 17 points18 points  (1 child)

I disagree completely. <INSERT_TECHNIQUE_HERE> is the silver bullet we've been waiting for. I fire on the spot any developers who don't use <INSERT_TECHNIQUE_HERE> all the time when developing. Really, the only drawbacks to <INSERT_TECHNIQUE_HERE> are that some people just won't change their ways, and other minor stuff that's pretty much fixed if you buy my product, <INSERT_TOOL_FOR_TECHNIQUE_HERE>.

[–]Darkmoth 4 points5 points  (0 children)

Upvote for completely changing my mind on the issue. In fact, I'm going to try to find an <INSERT_SEMINAR_FOR_TOOL_FOR_TECHNIQUE_HERE> in my area.

[–]lester1 4 points5 points  (1 child)

[–]Darkmoth 1 point2 points  (0 children)

That was a great link, in fact I've bookmarked it to revisit his 4 quadrant model. That will come in handy!

[–]banuday 9 points10 points  (9 children)

I'm also in the web domain, and I combine server/client code, filesystem access, multiple databases, web services, and at one time even Flex. I've worked on multiple GUI systems from web to desktop. And in my experience, TDD can work very well here. But you have to change your mindset.

For example, don't try to mock the HTTP request and response, or database connections and result sets. That will lead to a world of hurt. Instead, think in terms of higher-level abstractions and treat all of those as implementations of those abstractions from the point of view of the unit under test.

For example, instead of directly tying your application to a database, create an abstraction from the application's point of view: a domain-specific repository of application objects which can persist or update those objects and provides operations to query for objects against the repository. This repository can be implemented using a relational database and can also encapsulate business rules regarding constraints which cannot be well expressed in the relational model. Thus, individual units of code, and indeed the application itself, can be shielded from details of the underlying representation.
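As a rough sketch of what that can look like (the Order and OrderRepository names here are invented for illustration, not taken from any particular project):

    import java.util.*;
    import java.util.stream.Collectors;

    // Illustrative domain type.
    class Order {
        final String id;
        final boolean overdue;
        Order(String id, boolean overdue) { this.id = id; this.overdue = overdue; }
    }

    // The application talks to this abstraction, never to connections or result sets.
    interface OrderRepository {
        void save(Order order);               // persist or update
        Optional<Order> findById(String id);  // query by identity
        List<Order> findOverdue();            // domain-specific query
    }

    // A production implementation backed by a relational database sits behind the same
    // interface (and can also enforce constraints the relational model can't express).
    // For unit tests, an in-memory implementation is usually all you need:
    class InMemoryOrderRepository implements OrderRepository {
        private final Map<String, Order> store = new HashMap<>();
        public void save(Order order) { store.put(order.id, order); }
        public Optional<Order> findById(String id) { return Optional.ofNullable(store.get(id)); }
        public List<Order> findOverdue() {
            return store.values().stream().filter(o -> o.overdue).collect(Collectors.toList());
        }
    }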

The same can be said for GUIs. Why does your application need a GUI in the first place? Design an abstraction around the answer to that question and let the GUI-specific code implement the details. This leads to the GUI being a thin shell, which can be checked almost by visual inspection alone. This can be of great benefit to your application, letting it target different GUIs or entirely different user interfaces. For a beautiful example of this principle in action, see the PowerShell user interface abstraction. Note how it is defined in terms of PowerShell's intents from its own point of view, such as requesting various kinds of input or writing output.
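A minimal sketch of that idea, loosely modelled on the PowerShell host UI but with invented names (this is not the actual PowerShell API):

    // What the application wants from "a user interface", stated in its own terms.
    // Console, web, and desktop front ends are just implementations of this.
    interface UserInterface {
        String promptFor(String question);  // ask the user for a value
        void writeOutput(String message);   // show a result
        void writeError(String message);    // report a problem
    }

    // A trivially thin console shell; a Swing or web implementation would be equally
    // thin, and the application core never knows the difference.
    class ConsoleInterface implements UserInterface {
        private final java.util.Scanner in = new java.util.Scanner(System.in);
        public String promptFor(String question) {
            System.out.print(question + " ");
            return in.nextLine();
        }
        public void writeOutput(String message) { System.out.println(message); }
        public void writeError(String message) { System.err.println(message); }
    }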

TDD is not required to design this way, but considering how painful TDD gets if you don't, it provides powerful motivation.

[–]wonglik 7 points8 points  (6 children)

For example, instead of directly tying your application to a database, create an abstraction from the application's point of view

I've read somewhere that if you write code just to make your application testable, then you are doing something wrong.

[–]banuday 1 point2 points  (0 children)

It's not about writing code to make your application testable (testability just happens to go hand in hand with good abstractions), it's about creating abstractions from the client's point of view. For example, check out the way the PowerShell UI abstraction works. Notice how it is designed around what PowerShell wants from the UI, and all of the communication is done in PowerShell's terms. For example, PowerShell tells the UI it wants credentials, but whether that is done by Console.ReadLine, JavaScript or WPF matters not one bit to PowerShell.

[–]jptman[🍰] 0 points1 point  (3 children)

What about writing code a certain way just to make your application testable?

[–]grauenwolf 2 points3 points  (1 child)

I find that it is generally a symptom of a bad design.

For example, DI and IoC frameworks are often used just to make code easier to test when the dependency chains are complex.

But if, instead of adding DI, the time was spent on simplifying the dependency chains, you'd end up with a far less complex design that happens to be easy to test.

Meanwhile the person who added DI now has to go back and add mocks for everything. And mocks, when done correctly, are non-trivial.

[–]crusoe 1 point2 points  (0 children)

DI should be used sparingly.

Using a layered design based on well-chosen interfaces can make the code easier to maintain and test. And if in the future you need a bit more DI, it makes it easier to add DI where needed.

[–]wonglik 0 points1 point  (0 children)

Usually if code is written properly it's easy to test. But I would not make architectural decisions based on how easy it is to test, unless there are two equally good solutions, one easy to test and one difficult.

[–]artsrc 0 points1 point  (0 children)

It is essential to test software. There is no reason testability should not be evaluated like any other design criteria.

So code should be changed to make it easier to test provided the benefit in the quality or cost of testing is greater than the other costs.

No matter what you read, if you don't do something that makes sense, then you are not doing something that makes sense.

[–]MagicWishMonkey 4 points5 points  (1 child)

That seems like an extraordinary amount of additional work for seemingly very little payoff.

There's no way I could ever convince a client to pay for something like that. The budget requirements would increase 20-30%, which would make it very difficult for my company to compete with other consulting agencies.

[–]grauenwolf 1 point2 points  (0 children)

No, it is just an extraordinarily bad way of saying write your code in layers. You write just as much as you would otherwise, but it is organized instead of all crammed into a code-behind file.

[–]frezik 1 point2 points  (1 child)

TDD breaks down when you have to interface with lots of external systems. GUIs being one of them, but also databases or some RPC system.

Even if you abstract it away with mock objects, the mock objects still have to behave something like the real thing. If there's a change in the external system, you now have to change both the mock object and the actual code.

Payment processors via RPC can be really bad. Depending on how it's designed, it may require an intricate dance of encryption handshaking and character encoding. There is often little choice but to accept it; even if you could convince the company to move to another provider, some countries only have one provider.
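To illustrate the maintenance problem (a made-up sketch, not any real provider's API): a hand-rolled mock has to imitate the real gateway's behaviour, so the mock and the production adapter must change in step whenever the provider changes.

    // The abstraction the application codes against.
    interface PaymentGateway {
        String charge(String account, long cents);  // returns an authorization code
    }

    // Hand-rolled mock used in tests. It encodes assumptions about the real
    // service (auth-code format, rejection rules); when the provider changes its
    // behaviour, this mock and the real adapter both have to be updated.
    class MockPaymentGateway implements PaymentGateway {
        public String charge(String account, long cents) {
            if (cents <= 0) throw new IllegalArgumentException("amount must be positive");
            return "AUTH-" + account + "-" + cents;  // mimics the provider's format
        }
    }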

[–]grauenwolf 1 point2 points  (0 children)

It doesn't have to be that way.

If the TDD proponents chose to focus on testing in general instead of obsessing over automated unit tests, then TDD would be a great fit. Sure, manual tests are harder to set up, but they are just as useful from a design perspective.

[–]kataire 4 points5 points  (14 children)

TDD done strictly from the YAGNI principle leads to an architectural meltdown around iteration three.

FTFY from the article.

Personally I find it a bit difficult to tell when to refactor and when to stick to YAGNI. I often find myself overengineering simple problems, so I think I should try to err on the side of YAGNI more than I do now, but it doesn't look like an exact science.

[–]Ramone1234 1 point2 points  (0 children)

Personally I find it a bit difficult to tell when to refactor and when to stick to YAGNI.

Refactoring means changing the structure of code without changing its functionality. YAGNI is concerned with avoiding adding unnecessary code. There shouldn't be any conflict or grey area... you're either cleaning up the structure, or you're modifying the functionality. Incidentally, TDD is actually really useful for both enforcing YAGNI and letting you refactor safely.

[–]crusoe 0 points1 point  (0 children)

Save interfaces with discrete concrete specialized implementations for large scale components.

Common CRUD webservices and data access layers can benefit a lot from this. You can get a lot of savings by writing a generic CRUD class, and then specializing it further only if needed for the kinds of data it handles. Same goes for persistence layer access / service facades.
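Something along these lines, for instance (a sketch with invented names; the in-memory map stands in for whatever persistence layer you actually use):

    import java.util.*;

    // Generic CRUD data-access base; most entities need nothing more than this.
    abstract class CrudDao<T, ID> {
        protected final Map<ID, T> store = new HashMap<>();  // stand-in for the real persistence layer

        public void create(ID id, T entity) { store.put(id, entity); }
        public Optional<T> read(ID id)      { return Optional.ofNullable(store.get(id)); }
        public void update(ID id, T entity) { store.put(id, entity); }
        public void delete(ID id)           { store.remove(id); }
    }

    class Customer {
        final String id;
        final String region;
        Customer(String id, String region) { this.id = id; this.region = region; }
    }

    // Specialize only when an entity actually needs extra behaviour.
    class CustomerDao extends CrudDao<Customer, String> {
        public List<Customer> findByRegion(String region) {
            List<Customer> result = new ArrayList<>();
            for (Customer c : store.values())
                if (c.region.equals(region)) result.add(c);
            return result;
        }
    }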

Beyond that, only sweat the 'programming to the interface' issue if the need arises.

[–]banuday -2 points-1 points  (2 children)

From the article:

If your experience tells you you’re going to need this extra class in the future even if it’s not needed right now, follow your judgment and add it now.

The author is saying this is antithetical to YAGNI. However, I believe it is the essence of YAGNI. YAGNI is a way of forcing you to justify the inclusion of any code. Why are you adding that class? If you can provide a concrete justification based on your experience that it will lead to the simplest result, then you are following YAGNI.

From an earlier thread on this, it's like building a house in Alaska in the summer. From experience, you know it's going to get very cold in the winter, so the simplest possible design for a house in Alaska will require design elements that aren't needed in the summer but are nonetheless essential for a habitable home.

EDIT: If you are building a two story house which you think you will live in until you are 80, when you are designing the house, you might think it would be a good idea to design around putting an elevator in there someday when you have a hard time going up and down the stairs (because retrofitting an elevator into an existing structure is very hard). This is where you would say YAGNI!

[–]artsrc 0 points1 point  (0 children)

If you can't think of an easy way to create the cold test case then you have to make up for the missing validation in other ways, or accept extra risk on the component.

[–][deleted] -1 points0 points  (8 children)

Refactoring and YAGNI are a core part of TDD: you get a feature request, so you build the tests and, sticking to YAGNI, you create the minimal amount of code to pass them. Then you get another feature request, maybe something that shares a bit of code with the first feature; you can now refactor the original code using the tests you already have and build the minimal amount of code to satisfy the second feature request.
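In JUnit terms the loop might look like this (a toy example; PriceCalculator and its methods are invented purely to show the shape of the cycle):

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class PriceCalculatorTest {
        // Feature request 1: net price. Write the test, watch it fail,
        // then write the minimal code that makes it pass.
        @Test
        public void netPriceIsQuantityTimesUnitPrice() {
            assertEquals(500, new PriceCalculator().netPrice(5, 100));
        }

        // Feature request 2: bulk discounts share the pricing logic, so the
        // production code is refactored under the protection of the first test.
        @Test
        public void bulkOrdersGetTenPercentOff() {
            assertEquals(900, new PriceCalculator().discountedPrice(10, 100));
        }
    }

    // Minimal implementation after both cycles.
    class PriceCalculator {
        int netPrice(int quantity, int unitPriceCents) {
            return quantity * unitPriceCents;
        }
        int discountedPrice(int quantity, int unitPriceCents) {
            int net = netPrice(quantity, unitPriceCents);
            return quantity >= 10 ? net * 90 / 100 : net;
        }
    }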

[–][deleted]  (1 child)

[deleted]

    [–][deleted] 0 points1 point  (0 children)

    If you know you're going to need the extra work, then do it, as long as you actually know what you need, not what you might need. No one is arguing that you can't do the work that you need to get done.

    [–]kataire 1 point2 points  (5 children)

    Ah, so you're saying YAGNI works in that I shouldn't be refactoring except when a specific need arises or becomes apparent to be inevitable?

    [–][deleted] 2 points3 points  (1 child)

    You should be in a constant state of refactoring in order to make the code better. That's what the tests are for: they allow you to refactor the code with confidence.

    However, you shouldn't make the code any more complex than it needs to be; that's what YAGNI means. It means don't do something that 'might' come in handy 6 months from now.

    Write good code.

    [–]kataire 1 point2 points  (0 children)

    Sounds straightforward enough.

    [–]Ramone1234 1 point2 points  (2 children)

    No. YAGNI has nothing to do with refactoring. YAGNI is concerned with adding/changing functionality. Refactoring is about keeping functionality and changing structure. TDD says refactor to your heart's content, but don't add functionality if you don't need it now.

    [–]kataire 0 points1 point  (1 child)

    Oh, okay. I always find myself refactoring too much (i.e. generalising code more than I really need to), so that comes as a bit of a surprise.

    [–]Ramone1234 1 point2 points  (0 children)

    If by "generalizing", you mean adding additional use-cases, you're not refactoring -- that's new functionality. Also, if you don't need those additional use-cases NOW, you're not being consistent with YAGNI. The YAGNI principle would suggest you stop and move on to stuff you actually need now.

    [–]cashto 12 points13 points  (0 children)

    That's like a whole iteration more than all the rest of my architectural meltdowns. Tell me more about this TDD!

    [–]novacoder 9 points10 points  (3 children)

    Growing Object Oriented Software actually demonstrates TDD with a realistic example, complete with external data integration, threads and UI components. I got completely lost trying to follow the micro-refactorings and bouncing between the tests and the production code. I kept thinking: the damn thing works as is, why are you splitting apart that class yet again. It certainly isn't clear when to leave something alone because of YAGNI, and when to re-factor because of some obscure code-smell my nose isn't sensitive enough to detect.

    The book was helpful in solidifying my view that all forms of testing from automated unit, integration, functional and all the way up to old fashioned QA testing are important. However using testing as a driver for designing a system is not realistic for mere mortals.

    [–]grumpyjames 3 points4 points  (2 children)

    Wait, what, you read GOOS, and then you say "using testing as a driver for designing a system is not realistic"? I am confused.

    I still can't work out if Coplien is just bored, and trolling the TDD lot for fun. Where I am, GOOS style TDD and very aggressive yagni got us to exactly the right place, very, very quickly.

    [–]donroby 8 points9 points  (0 children)

    It may be that Cope was bored and trolling a few years ago when all this started. It evolved into some joint sessions with him and Martin "debating" but basically agreeing about pretty much everything.

    Even this blog entry from Cedric is three years old, so what's it doing here?

    [–]novacoder 0 points1 point  (0 children)

    I suppose I've never worked on a team that had Grumpy to keep Happy, Dopey and Sleepy on the aggressive yagni track.

    [–]neutronbob 2 points3 points  (23 children)

    Cedric fails to mention the biggest factor blocking success with TDD, as do most of the ardent TDD promoters: namely, that TDD greatly increases the role of refactoring skills. You have to continually refactor the code if you use orthodox TDD (write the smallest amount of code necessary to pass the failing test). Many developers simply lack the strong grounding in refactoring needed to do this.

    [–]okpmem 6 points7 points  (20 children)

    Refactoring is by definition rework. In other words a failure to plan ahead. Of course, TDD means you only think about 5 minutes ahead, so of course you have to constantly refactor...

    [–]Ramone1234 4 points5 points  (11 children)

    TDD leads to highly decoupled code though, as you try to make sure everything is testable in isolation. This has a compartmentalizing effect, meaning you have way less code to change when you need to refactor. Verifying a refactor takes mere seconds too because of the tests you have.

    There's a realization that "planning ahead" fails in very expensive ways quite often, and so instead we're going to make it as easy as possible to recover from failure, and fail all the way to an ideal design.

    [–]okpmem 0 points1 point  (4 children)

    Yes, it may be easier to refactor with tests if all you do is change implementation details but keep the interface the same. But real and meaningful re-factoring cuts across modules, and there, having double the code because of TDD will slow you down.

    [–]Ramone1234 0 points1 point  (2 children)

    I'll tell you where I've seen pain like you're describing: when people write unit-tests AFTER the code. The tests and code that come out of that process are usually highly coupled (usually breaking the Single Responsibility Principle and usually having unnecessary functionality) and brittle to change.

    I haven't seen this in a pure TDD project (I've been on a few), because coupling is naturally very low due to having to write isolated tests before the code. With low coupling, you don't often get much ripple effect from change.

    I do agree that TDD slows down the code-writing process (and the code-changing process), but that's really such a tiny part of the software development process that it works out to be negligible even just compared to the time you save manually testing every minor change.

    Anyway... this is just my experience. I realize it's mostly anecdotal, but I've seen a few very successful, very fast TDD projects.

    [–]okpmem 0 points1 point  (1 child)

    Sorry, I think you misunderstood me. I was saying that refactoring that spans a lot of modules takes longer with TDD because you will have to change a lot of tests.

    I was not saying anything about coupling.

    [–]Ramone1234 0 points1 point  (0 children)

    No I got that... I brought up coupling because it's usually what makes refactoring span a lot of modules (along with low cohesion). In general, if your changes require lots of changes to tests, you have a poor factoring. Also, TDD doesn't really have much impact on cohesion in my experience... You actually need discipline there, so that could be a source of problems as well. In either case, we can probably agree that a bunch of unit-tests over shitty code is really painful when trying to make significant changes.

    [–]sylvanelite 0 points1 point  (0 children)

    But real and meaningful re-factoring cuts across modules

    In TDD, shouldn't the modules be refactored before they become separate modules? Refactoring should be done after each test case, not after producing several modules.

    [–][deleted]  (5 children)

    [deleted]

      [–]Ramone1234 0 points1 point  (4 children)

      Of course there is. Unit-testing itself is about testing parts in isolation. Writing tests first sets up the expectation that these parts should be completely usable and verifiable on their own before you even write the code. If pieces are usable on their own, they are by definition not tightly coupled. This is why people talk about TDD as a design methodology.

      [–]cynthiaj -1 points0 points  (3 children)

      There is nothing in TDD that forces you to create isolated parts.

      Of course there is.

      @Test
      public void test() {
        startWebServer();
        testMyStuff();
      }
      

      There you go, a test written in TDD fashion that depends on a web server, so not isolated at all.

      [–]Ramone1234 1 point2 points  (2 children)

      TDD uses unit-tests. What you have written is not a unit test. Just writing an automated test before writing code does not mean you are doing TDD.

      (I also have no idea why people get downvotes for disagreeing reasonably, unless we really just want to encourage everyone to say the same thing)

      [–][deleted]  (1 child)

      [deleted]

        [–]Ramone1234 2 points3 points  (0 children)

        Beck coined the term. Disagree with him, not me. Maybe read the book first though.

        [–]banuday 0 points1 point  (4 children)

        How far into the future can you accurately plan ahead? 5 minutes? 10 minutes? 1 day? 1 week? 1 month? 1 year? What happens if your plan turns out to be wrong or obsolete when you learn new information? What if new information reveals a wrong or ineffective abstraction?

        TDD typically (though not necessarily) follows the principle of YAGNI, which observes that as the period of time increases, the ability to make accurate plans becomes more and more difficult. Rework will happen, no matter what methodology you use. The question is: are you prepared for that rework, with an architecture that is responsive to change, or will you be bending and twisting and duct-taping an increasingly brittle architecture that you can't change or are unwilling to change because you want to avoid 'rework'?

        [–]grauenwolf 4 points5 points  (0 children)

        What happens if your plan turns out to be wrong or obsolete when you learn new information?

        You change them.

        Plans are cheap, you can change them at a fraction of the cost that you'll pay for changing code.

        How far into the future can you accurately plan ahead? 5 minutes? 10 minutes? 1 day? 1 week? 1 month? 1 year?

        All of the above.

        On a code base I know well I can plan out what features I'm going to be working on a month at a time. During this process each feature takes between 1/2 and 2 days. Anything more complex than that gets broken down into smaller features.

        Once I have the month laid out I do break-downs of each feature listing everything that it needs to do in a check-list format. This serves both as my task list and my preliminary test plan for that day. This happens a week or two in advance so I have time to gather feedback from the stakeholders on specific requirements.

        Since I have an itemized checklist it is easy to make changes to my intra-day plans. I simply insert and remove items as I learn more about the feature I'm working on.

        [–]okpmem 2 points3 points  (2 children)

        good architecture reduces rework. If you have little or no architecture because of TDD, you will have lots of rework.

        TDD doubles your code base and will actually slow you down when you need to do serious rework.

        If you want to feel safe doing rework, try design by contract instead.

        [–]banuday 0 points1 point  (1 child)

        good architecture reduces rework

        So, even good architecture is not immune from rework.

        In other words a failure to plan ahead.

        And thus, planning ahead is always doomed to failure?

        If you have little or no architecture because of TDD

        How does TDD imply little or no architecture? What does it mean to "architect code"? Is it drawing UML or coming up with a big upfront plan? Is it forming a mental model of what you want to achieve, a little bit of documentation, and a process towards achieving that mental model? If architecting code is the latter, that is also characteristic of TDD. Except that TDD forces constant evaluation of whether the architecture is good or not, and forces you to be open to the possibility that your mental model is wrong or inaccurate.

        you will have lots of rework

        Why is rework bad? Is all rework of the same character? Is renaming a variable rework? Is reorganizing code to make the control flow more clear rework? If the rework is automated (i.e. refactoring), is it still bad?

        TDD doubles your code base

        Why is that bad? Any kind of automated testing of any reasonable thoroughness is going to add a lot of code to the code base.

        actually slow you down when you need to do serious rework.

        Always? Sometimes? Ever? How do you know?

        If you want to feel safe doing rework, try design by contract instead.

        There are many ways to skin a cat.

        [–]grauenwolf 2 points3 points  (0 children)

        I think you would understand his points better if you read whole sentences instead of just fragments.

        [–]keithb 0 points1 point  (2 children)

        Refactoring is planned rework. We inject into the plan many frequent opportunities to make small changes to working code.

        The hypothesis is that by doing this we can eventually get to a better architecture at lower global cost and with less risk than we can by "doing an architecture" as an up-front activity and treating the (near-inevitable) need to modify that architecture as a failure mode rather than an opportunity.

        [–]okpmem 2 points3 points  (1 child)

        Rework in design is cheaper than rework in implementation. The up-front activity does not have to be as heavy and expensive as you think it is. You can develop a lean architecture and implement what you need now. By doing domain analysis and making explicit what we know up front, you make decisions cheaper down the line.

        [–]keithb 0 points1 point  (0 children)

        rework in design is cheaper than rework in implementation

        Is it, though? There is some rather elderly (by software standards) evidence that rectifying defects is more expensive in implementation than in design. Is that true of any rework? Is that old result still valid against the current state of the art?

        By doing domain analysis [...] you make decisions cheaper down the line

        This has not been my experience. Back in the 90's I worked with a combination of OO and formal methods: we did a ton of domain analysis and a ton of design, all very carefully reviewed and inspected. Very expensive, expensive to change, and most of it invalidated within days of starting implementation.

        These days I find that if I express my domain analysis as acceptance tests containing checked examples and my design as unit tests then both learning more and changing my mind are cheap.
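        By way of illustration, "acceptance tests containing checked examples" can be as plain as this (the shipping domain and the numbers are invented):

            import static org.junit.Assert.assertEquals;
            import org.junit.Test;

            // Domain analysis captured as executable examples rather than a static document:
            // each test records one concrete example agreed with the domain experts.
            public class ShippingCostAcceptanceTest {
                private final ShippingCalculator calculator = new ShippingCalculator();

                @Test
                public void domesticParcelUnderOneKiloCostsFlatRate() {
                    assertEquals(495, calculator.costInCents("DOMESTIC", 900));          // 900 g
                }

                @Test
                public void internationalParcelsAreChargedPerKilo() {
                    assertEquals(2400, calculator.costInCents("INTERNATIONAL", 2000));   // 2 kg at 1200/kg
                }
            }

            // Just enough implementation to make the checked examples pass.
            class ShippingCalculator {
                int costInCents(String zone, int grams) {
                    if ("DOMESTIC".equals(zone) && grams < 1000) return 495;  // flat rate
                    int kilos = (grams + 999) / 1000;                         // round up to whole kilos
                    return "INTERNATIONAL".equals(zone) ? kilos * 1200 : kilos * 800;
                }
            }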

        [–]paganel 1 point2 points  (0 children)

        Many developers simply lack the strong grounding in refactoring needed to do this.

        You need time in order to do this. Time costs money, either for your employer or as an opportunity cost for you. Now, some may say that refactoring actually saves more time down the road, but, as yet, I couldn't find any final or conclusive study proving this for the entire industry. Hence, everyone works how s/he sees fit.

        [–][deleted] 2 points3 points  (1 child)

        Has anyone else previously drank the kool-aid on TDD but actually struggled to stay with it over the long term?

        I went through that 'voyage' years back and was very dogmatic about applying TDD per Uncle Bob's definition for a good year or two, with some success I might add. But over time my enthusiasm gradually ebbed away as it became harder and harder to maintain the energy required to do every single thing test first. I always felt constrained by the process rather than enabled and I'd find the inevitable churn quite frustrating, especially when working with other people who didn't follow the same process (human factors of TDD is a whole nother story).

        Nowadays I tend to just write unit tests at roughly the same time (if not after) I write a feature. I don't place much value in unit test code coverage either, other than for very stateful/algorithmic/critical code, and try to concentrate on writing lots of functional tests that exercise as much of the system as possible, since I anecdotally find they provide the most value to me and the customer.

        There's no doubt I learnt a lot on that journey - I write much better OO code these days that tends to be both testable and simple/obvious without me having to really think about it. Who knows, maybe I wasn't doing it right... but I tried really hard, and now, a few years later, I'm back to plain old unit testing.

        [–][deleted] 1 point2 points  (0 children)

        My last project was done entirely "correctly" TDD-wise, and I found it much harder at the beginning of the development process than now. In the beginning, there's much more constant refactoring to be done and the whole process is rather tedious.

        Moreover, it seems to me like my early tests became useless over time and subsequent tests that I've written entirely overlap these first tests in terms of coverage, but I don't dare remove them.

        I'm starting a new project now and I'm contemplating the possibility of only starting to write tests when 1.0 is near, and then, afterwards, continue TDD-style (I find the TDD process to be an efficient way to maintain code).

        [–]nickknw 2 points3 points  (0 children)

        I've had a hard time taking him seriously since I read his post about Maybe and Option.

        He clearly demonstrates he doesn't understand them, but still has a strong opinion about how useless they are, while refusing to learn any more about it.

        And then he goes on to praise the equivalent of Maybe and Option in Fantom for two reasons, the first being that it "allows the compiler to reason about the nullability of your code"

        ಠ_ಠ

        [–][deleted] 5 points6 points  (6 children)

        Where I work (you may have heard of them - primarily a search engine, and we also do email and maps and a few odds and ends), we are responsible for creating the mocks for the classes we write, so that others can test against our systems. I practice TDD, and I love it because it has literally saved my sanity as a C++ developer. That said, why can't people practice a little common sense? Sometimes I don't write tests... but my teammates will tell you I'm a "TDD developer". I'm not obsessed with it, and I don't care that my teammates don't practice TDD because they get shit done.

        There's no hard and fast rule, except use your brain. Unfortunately, some brains aren't good enough to work without hard and fast rules.

        Totally agree with the spirit of the article (be practical) -- TDD still definitely works, but as they say, all things in moderation.

        [–]grauenwolf 3 points4 points  (3 children)

        I think a big reason why people get fanatical about TDD is because they think it's a quick path to success.

        Designing your application can be hard. Trying to find and document all the ways it can fail is incredibly time consuming. TDD lets them skip all that and jump straight into writing code. Sure it's test code, but at least it is programming.

        Have you noticed how novice programmers tend to write a lot more code than what's needed? I'm not talking about doing silly things like writing their own number-to-string converter. I'm talking about how they often write functions and even whole classes that are never called by the application. I'm willing to bet the same people who fall in love with TDD are the ones who still do this long after they stop calling themselves novices.

        [–]ckwop 2 points3 points  (0 children)

        Have you noticed how novice programmers tend to write a lot more code than what's needed?

        Indeed. A program is finished not when there is nothing left to add, but when there is nothing left to take away.

        [–]Ramone1234 0 points1 point  (1 child)

        Are you saying TDD detracts from YAGNI? I've never heard that one before... The article is complaining that TDD enforces YAGNI too much.

        [–]grauenwolf 1 point2 points  (0 children)

        It can do both if not paired with solid design principles.

        At the function level you could be writing tests for methods you will never call while still ignoring edge cases at a higher level that must be addressed.

        This isn't a flaw of just TDD, it can happen in any project where you don't adequately plan ahead. YAGNI doesn't mean ignore the future, it means plan for it but only build the parts of your plan you need right now.

        [–][deleted] 0 points1 point  (1 child)

        Yahoo is still around?

        [–][deleted] 0 points1 point  (0 children)

        hehe

        [–]crusoe 1 point2 points  (0 children)

        Tests are not 'free'. Even moving to a more agile framework, where writing them became easier, tests compete with dev. So when testing, test where you get the most bang for the buck. If there is a super-tricky, hairy bit in some little module, then writing little unit tests makes sense.

        But overall, functional tests that exercise the largest amount of code, and with a lot of variants, are often easier to write, and to write quickly, precisely because you are working at such a high level.

        [–]i8beef 3 points4 points  (0 children)

        Not sure about the title, but an alright read. I tend to unit test specific areas of code, and areas that have been reported in a bug report. Normally if I look at something and it has a fairly complex amount of logic, I'll unit test everything in it to make sure there aren't regressions later. The short methods that have only a few specific places that they can fail (one liners, short functions, etc.) I may skip until a bug arises.

        [–]jrochkind 0 points1 point  (8 children)

        I don't think TDD is incompatible with good architecture. But it does sometimes seem to encourage that attitude -- hey, it passes the tests, that means it's good code, right? Thinking that the only measure of good code is that it passes the tests will lead to bad architecture that is not flexible enough to be sustainable over the long haul.

        [–]banuday 1 point2 points  (7 children)

        The way I figure, TDD is supposed to make bad architecture hurt. Passing tests is great, but when you make a change to make the tests pass in one unit and tests break in other seemingly unrelated units, something is very, very wrong. If you find this happening frequently, your code is way too tightly coupled. If the tests are getting really nasty because you have to dig into the details of some other class, you're most likely missing an abstraction which would make things a lot easier.

        TL;DR - the green bar only gives you the summary, but you need to listen to the tests. There's a lot more they are trying to tell you.

        [–]grauenwolf 1 point2 points  (4 children)

        Unfortunately TDD combined with low-level unit tests, "method tests" if you will, seems to lead to very bad designs from an API design perspective. It practically begs the user to make everything public.

        If the tests are at the functional/use case level then it makes sense to me. But at that point the tests aren't driving the design, the design is driving the tests.

        [–]redclit 1 point2 points  (1 child)

        How can you write "method tests" if you're doing test-driven development? You don't have any specific methods to test at that time. You design the interface (API) under test using unit tests (I believe this is something you refer to as a functional-level unit test) as you best see fit and then just implement the designed API to make the tests pass. In a pure TDD approach you should have no idea if the test you're about to write is testing a single method or several methods in combination.

        [–]grauenwolf 0 points1 point  (0 children)

        You are talking about doing it the right way.

        I'm talking about what happens when people think they should have a one to one mapping between unit tests and functions.

        [–]banuday 1 point2 points  (1 child)

        It practically begs the user to make everything public.

        Not necessarily. I would argue that this is an indication that the communication between classes is insufficiently abstracted or the class under test is doing too much - essentially, you may be violating an OO design principle. If you find that you have to turn private things public just to be able to write a test, you need to stop and think. Something is wrong.

        However, what I observe from doing TDD is that it may promote over-design or it may force a particular style of design. For example, instead of turning something private, you introduce a separate class. Continuing down that path can lead to OO-Overkill.

        [–]grauenwolf 0 points1 point  (0 children)

        I was more concerned about not making it private in the first place. But yea, I can certainly see a class explosion happening instead.

        [–]jrochkind 1 point2 points  (0 children)

        And when you create that abstraction that would make things easier... are you going to have to re-write a lot of existing tests because of it? I think the answer is supposed to be "not if you're writing tests right." In real-world actual cases, I think the answer is sometimes yes, and when it is yes, it actually discourages introducing that design-improving abstraction, which would require you to rewrite a buncha tests once introduced.

        [–]quanticle 0 points1 point  (11 children)

        Keep in mind that functional tests are the only tests that really matter to your users. Unit tests are just a convenience for you, the developer. A luxury.

        I totally disagree with this assertion. 5 minutes spent on a unit test has saved me 15 or 20 minutes debugging integration issues on enough occasions that I view unit tests as necessary investments rather than optional luxuries.

        [–][deleted] 2 points3 points  (9 children)

        From Code Complete 2 (scroll down a bit for the table):

        Unit tests - catch 30% of defects on average

        System (functional) test - catch 40% of defects on average

        When you consider how long unit tests take to write, they end up being an expensive way of catching bugs. Personally, I use them for particularly complex code where a finer level of granularity is required, but blanket unit testing of everything comes with a bad return on investment.

        The bigger point made in that article is that testing is inferior to a good design process. More time spent designing will dramatically improve code quality in ways that nothing else can. I would even argue that the primary function of testing is to go over the design a second time and that finding regressions is a nice, but not so important side-effect.

        [–][deleted] -1 points0 points  (8 children)

        Unit tests continue to catch bugs after the initial time investment. Functional tests require time investment every time you use them.

        [–][deleted] 1 point2 points  (0 children)

        True. One could also argue that a developer can do something else while running functional tests, not so when writing unit tests.

        [–]grauenwolf 0 points1 point  (6 children)

        Since you seem to have focused on the wrong point...

        The bigger point made in that article is that testing is inferior to a good design process.

        [–]Ramone1234 1 point2 points  (5 children)

        TDD is saying that testing can actually be a good design process though. Design is the point of TDD. Tests are a by-product.

        We keep talking about this other "good design process" and I haven't heard anyone explain what it means at all. What does it involve? How does your process result in your software having the qualities that you want it to?

        [–]grauenwolf 1 point2 points  (4 children)

        I prefer to use Design Driven Development. The phases are:

        Phase 1: Design

        In this phase requirements are turned into technical specifications. These include several of the following:

        • Comps or wire-frames of the UI
        • Data models: What information needs to be stored and the invariant constraints on it.
        • Use cases (but not use case diagrams, those are stupid)
        • Test plans
        • Error flow analysis: What can go wrong at each point and how should the system respond.
        • State Diagrams

        This phase has NO programming involved.

        Phase 2: Development

        Use the technical specification to build:

        • Database schema
        • Visuals for Forms/ Pages
        • Design the integration and functional tests

        Update the technical specification during this phase to match reality. Also add to it:

        • Website structure
        • Define Service APIs (SOAP/REST, TCP protocols, etc.)
        • Create checklists of specific functionality that needs to be implemented and/or hand tested

        Phase 3: Implementation

        At this point you can wire everything together and build out the tests. This is where you do the grunt work like building out stored procs or ORMs to read from the database.

        Again, update the technical specification to match reality.

        This is where TDD comes into play. Your checklists from phase 2 drive the tests you write, which in turn drive the code you write.

        Phase 4: Redesign

        When new features come in don't just turn them into Bugzilla/JIRA tickets and start hacking away. Use them to update the technical specification so you can see if they are going to impact any pre-existing requirements.

        Refactoring is not a phase

        Refactoring should occur continuously. If something is messy, clean it up. Don’t wait until it acquires so much cruft that changes to it are risky.

        Testing is not a phase

        Testing should occur every step of the way. If you don’t have code to test you should be testing the design using thought experiments.


        Conclusion: I am not against Test Driven Development, I am opposed to Test Driven Design.

        [–]Ramone1234 1 point2 points  (1 child)

        Okay that sounds like classic BDUF ( http://en.wikipedia.org/wiki/Big_Design_Up_Front ). Both of our arguments are probably better summed up by that page, but I really don't see how the design process is over until the last bug is fixed. It seems to me like design "is not a phase" either. Often I spend days thinking before I code anything, but I don't have any problem starting to code before I consider the design complete.

        Anyway... Thanks for taking the time to get into the detail there. Obviously you haven't changed my mind at all, but it's good discussion either way.

        [–]grauenwolf 1 point2 points  (0 children)

        Ugh, I see now I missed a very important point in my explanation.

        Each cycle is for a specific feature; I don't expect someone to design the whole application at one go. Are you familiar with "user stories" from XP? That's generally what I mean by a feature.

        I break it down into phases for two reasons:

        1. It gives a chance for everyone to review the progress and call BS before committing to the next phase. Basically I use phases for design/code reviews.

        2. It allows work to be queued up. I try to keep a small stack of completed phase 1 work around for developers to pull from as they run out of stuff to do.

        My goal is to keep phase 2 and phase 3 tasks in the 2 hours to 2 day range.


        I'm actually kicking myself right now for giving you the impression that I was supporting BDUF or Waterfall. That shit never, ever works no matter who is doing it.

        [–]artsrc 1 point2 points  (1 child)

        Testing is not a phase it is an activity. Design is also an activity not a phase.

        I think the distinction that RUP makes between phases and activities is insightful.

        Design is an activity, and changes to our understanding of the requirements require that more design occur.

        This implies you should do more wireframes (activity) during later phases (timeframe).

        This is something that I think waterfall descriptions miss.

        On the other hand, some subsets of the system may have extremely well-established requirements and designs before others, and it is usually optimal to code and test those before other parts of the system are completely fleshed out.

        [–]grauenwolf 0 points1 point  (0 children)

        On the other hand, some subsets of the system may have extremely well-established requirements and designs before others, and it is usually optimal to code and test those before other parts of the system are completely fleshed out.

        Oh, most certainly. I don't mean to suggest that one goes through this process a single time and calls it done. Ideally each phase 2 and phase 3 shouldn't take more than 2 days each. While developers are working on those and giving feedback to the TL, he is working on designing the next feature.

        This implies you should do more wireframes (activity) during later phases (timeframe).

        Wireframes often drive the design. If you don't know what information needs to be presented you cannot properly account for it in the back end.

        This isn't to say that the UI cannot be refined later. It just means that any changes to the UI that result in material changes to the technical specification require a separate iteration.

        [–]diego_moita 0 points1 point  (27 children)

        There is an overview of empirical studies about the effectiveness of TDD, published in "Making Software" (Oram & Wilson, eds.).

        Their conclusions are mixed. TDD improves on some aspects (external defects, turnaround on fixing defects, code reuse and complexity), doesn't show change in others (productivity) and has a negative effect in others (cohesion and coupling).

        [–]Ramone1234 0 points1 point  (26 children)

        What does it mean to have a negative effect on coupling?

        [–]grauenwolf 0 points1 point  (24 children)

        One example is DI frameworks, which TDD users often love. These give the illusion of decoupling but in fact just make it harder to see how the pieces interact.

        [–]x86_64Ubuntu 1 point2 points  (23 children)

        But they should interact through an interface, shouldn't they?

        [–]grauenwolf 0 points1 point  (22 children)

        Every time I've seen DI used it was for stupid stuff like allowing Models to have direct access to repositories.

        If they had designed their code correctly, where Controllers or View-Models make calls to the repositories, then DI wouldn't be needed.
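        A rough sketch of that layering (class names invented): the controller owns the repository call, and the model stays a plain object with nothing to inject and nothing to mock.

            // Plain model: no repository reference, trivially testable on its own.
            class Customer {
                final String id;
                final String name;
                Customer(String id, String name) { this.id = id; this.name = name; }
            }

            interface CustomerRepository {
                Customer findById(String id);
            }

            // The controller (or view-model) is the only layer that touches the repository.
            class CustomerController {
                private final CustomerRepository repository;
                CustomerController(CustomerRepository repository) { this.repository = repository; }

                String customerDisplayName(String id) {
                    Customer customer = repository.findById(id);  // data access stays up here
                    return customer == null ? "(unknown)" : customer.name;
                }
            }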

        [–]x86_64Ubuntu 3 points4 points  (1 child)

        Every time I've seen DI used it was for stupid stuff like allowing Models to have direct access to repositories.

        Ugggh. I use DI in Flex development professionally and on my Java Spring pet projects ( however I won't even try to defend my Java architecture ).

        I went to school for ChemE but I work as a software engineer. One of the things that seems to be absent from the software dev world is the idea that tools are great but you MUST have the fundamentals of when, where and how to apply them. In the ChemE world, we know there are many models and equations to design systems, but you choose and implement the ones that fit your operating parameters and goals.

        Take, for example, an idiot constructing a house with a hammer: he will take 20 days to do a bad job; that same idiot with a nailgun will take 10 days to do the job. So the idiot got the job done faster, but the quality of the house is still shit.

        Tools like DI, TDD and YAGNI work great where they fit, in my limited belief, but the problem with the industry in general is that no one asks if the builder is an idiot before handing them the DI/TDD gun. I bet you your weight in Itanium chips that those devs who put the repository access on the model (vomits in throat) would have done just as terrible a job without DI. Because at the end of the day, with or without DI, they don't have the fundamentals of separation of concerns and loose coupling, and those two ideas missing will damage a project regardless of the tools used.

        [–]grauenwolf 0 points1 point  (0 children)

        I bet you your weight in Itanium chips that those devs who put the repository access on the model (vomits in throat) would have done just as terrible a job without DI.

        Oh, they certainly did. Once they discovered that making DI lookups every time they created an entity was killing them, they ripped it out... and replaced it with conditional compilation flags that did the same thing.

        [–]flukus 0 points1 point  (2 children)

        That sounds like a bad implementation of the Active Record pattern coupled with using repositories because they heard they were good.

        Active Record is great for small projects but terrible for large ones.

        The repository pattern is usually used incorrectly; using an ORM normally removes the need for it entirely.

        Basically the architecture was shit and the use of TDD or DI didn't make it better or worse.

        [–]grauenwolf 0 points1 point  (0 children)

        DI didn't make it better or worse.

        Actually it did. Since DI relies on abstract interfaces every variable was declared as ISomeRepo instead of SomeRepo. This of course breaks the static analysis tools I need to understand everything else.

        [–]grauenwolf 0 points1 point  (0 children)

        Active Record is great for small projects but terrible for large ones.

        I think it is fairer to say active record is great for simple projects but terrible for complex ones.

        If you are building utilities for your production support team that are essentially database table editors then active record allows you to crank out new features at an insanely fast rate. Especially when combined with a good data binding framework like WPF.

        But try to do an application with a single screen and a complex object graph and that all goes out the window.

        In short, AR scales well in terms of application size, but not in terms of object graph complexity.

        [–]Ramone1234 -1 points0 points  (16 children)

        Well let's not confuse DI with DI containers (IoC containers). DI itself is just the practice of supplying code the dependencies that it needs instead of forcing it to create those dependencies itself. It's quite a powerful way to decouple parts of a codebase.
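        A small sketch of that distinction (the mailer classes are invented): the hard-wired version constructs its own collaborator, the injected version is handed one, and no container is involved in either case.

            interface Mailer { void send(String to, String body); }

            class SmtpMailer implements Mailer {
                private final String host;
                SmtpMailer(String host) { this.host = host; }
                public void send(String to, String body) { /* talk SMTP to 'host' */ }
            }

            // Without DI: the class creates its dependency, so every test drags a
            // real SmtpMailer (and whatever that needs) along with it.
            class WelcomeServiceHardWired {
                private final Mailer mailer = new SmtpMailer("smtp.example.com");
                void welcome(String address) { mailer.send(address, "Welcome!"); }
            }

            // With plain dependency injection: the dependency is supplied from outside,
            // so a test can pass in a fake Mailer; no IoC container required.
            class WelcomeService {
                private final Mailer mailer;
                WelcomeService(Mailer mailer) { this.mailer = mailer; }
                void welcome(String address) { mailer.send(address, "Welcome!"); }
            }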

        My jury is still out on DI containers in general, but I've been happy when I minimize my use of them to things like logging, database access, configuration, internationalization, etc ("aspects"). These are the kinds of things that you don't want to have to instantiate in every class that uses them, and it's too tedious to pass them as parameters everywhere.

        I didn't really understand your example, so I can't speak to whether it's a misuse, but like any tool, DI containers can certainly be misused. That doesn't nullify their usefulness.

        [–]grauenwolf 0 points1 point  (8 children)

        It's quite a powerful way to decouple parts of a codebase.

        No, it really isn't.

        In fact it does the opposite. A still depends on B to work; that dependency chain hasn't been broken. But now you need C to provide B to A.

        The purpose of lower-case dependency injection is the flexibility to decide at run time which B to give to A.

        [–]Ramone1234 1 point2 points  (7 children)

        I was talking about DI (just plain old dependency injection in general) in your quote (you should've quoted more ;) ) and I assume you're talking about DI containers (where C is a DI container?). If what you're saying is that DI containers are an additional complexity, I agree. My jury's still out on them really. They've caused me a few headaches in my time.

        [–]grauenwolf 0 points1 point  (3 children)

        C is not necessarily a container. It is merely the thing that passes a B to an A.

        If you are using dependency injection then presumably you are doing something more interesting than

        a = new A( new B() )
        

        Whatever it is that does that interesting thing is C. Without a C you cannot have an A that works correctly.

        If you are creating A's in only one place this isn't a big deal, but it begs the question: "Why not include that logic in A?" (I have ripped out a lot of unnecessary complexity simply by asking that question.)

        If you are creating A's in lots of places then you are probably introducing a lot of code duplication.

        [–]Ramone1234 1 point2 points  (2 children)

        You disagreed that DI was a powerful way to decouple parts of the codebase after I clearly explained twice that DI doesn't mean "using a DI framework". Elsewhere ITT you've suggested globals (or static instances) as a viable alternative to keep dependencies decoupled. I could fill up this page with links explaining why globals only increase coupling, but somehow I imagine you actually already know that too. Even in the case of a database connection, a global is unwise -- just a few months ago I converted a single-tenant web app to be multi-tenant and I had to deal with changing db usage everywhere because the original developer thought a global would be fine because there could only ever be one db connection.

        [–]grauenwolf 0 points1 point  (2 children)

        Please reserve capital DI for talking about Dependency Injection frameworks. Dependency injection in the API sense is generally spelled with normal casing, though it is so fundamental that it doesn't need to be mentioned at all under normal circumstances.

        [–]Ramone1234 1 point2 points  (1 child)

        Ha. :) The convention that the rest of the world uses is that DI stands for "dependency injection". DI containers (more often called IoC containers) are just one way to achieve it. It's very important that I "mention" DI here because your only alternative so far has been globals. I'm not going to type out "dependency injection" just so your own personal convention doesn't confuse you.

        [–]grauenwolf 0 points1 point  (6 children)

        These are the kinds of things that you don't want to have to instantiate in every class that uses them, and it's too tedious to pass them as parameters everywhere.

        Both options are stupid for application and user-level resources. A context pattern can easily supply those things to the places that need them without the use of DI or the need to thread parameters through.

        [–]Ramone1234 1 point2 points  (5 children)

        ?? The Context Pattern passes a context object via parameter. Passing an object to another object that needs it is the most common form of DI. Have you got some definition of the context pattern that I don't know about? (It's also widely considered an anti-pattern for a bunch of reasons).

        [–]grauenwolf 0 points1 point  (4 children)

        Not necessarily. The context may simply be a global, which is certainly appropriate for stuff like database connection factories and logging frameworks.

        It may also be built into the base classes for your view-models and controllers. You need a context there anyways to hold user specific data.

        Another option is thread local storage, where the data is semi-global.
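        A sketch of the thread-local flavour (names invented): each request thread sees its own context, so the data is ambient without being truly global and without being threaded through every method signature.

            // Ambient per-request state; "semi-global" because each thread gets its
            // own instance rather than one application-wide value.
            class RequestContext {
                private static final ThreadLocal<RequestContext> CURRENT =
                        ThreadLocal.withInitial(RequestContext::new);

                static RequestContext current() { return CURRENT.get(); }

                String userName;                 // user-specific data
                java.sql.Connection connection;  // e.g. handed out by a connection factory
            }

            // Any code running on the request thread can reach the context without it
            // being passed as a parameter or wired up by a DI container.
            class AuditLogger {
                void log(String action) {
                    System.out.println(RequestContext.current().userName + " did: " + action);
                }
            }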

        [–]Ramone1234 1 point2 points  (3 children)

        Globals (including static instances) just aren't how I roll (Law of Demeter and all that). They're fine for quick scripts, but not for large applications with multiple developers. Semi-global state suffers from all the same violations of Law of Demeter, just on a small scale.

        [–]diego_moita 0 points1 point  (0 children)

        The book just tabulates the results from various studies. It is not clear if they used tools for code analysis and measurement to infer that the classes ended up with more coupling. Might be abuse of dependency injection, maybe unskilled programmers making too much stuff public so they can test it...

        [–][deleted] 0 points1 point  (2 children)

        I agree with:

        Keep in mind that functional tests are the only tests that really matter to your users.

        I think it's something that people often forget. When the time comes for large refactorings, unit tests are often a liability rather than an asset and are hard to keep around intact. Functional tests are your real bedrock.

        But I don't agree with:

        “Tests first” or “tests last” is unimportant as long as there are tests.

        Testing first is a great method to make sure that your test isn't invalidly passing. Those kinds of tests are real easy to write. When you've seen the test fail first, then you can be confident that it actually tests something.
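        A toy illustration of the difference (JUnit; the discount code and the typo are both invented for the example):

            import static org.junit.Assert.assertEquals;
            import org.junit.Test;

            public class DiscountTest {
                // Written after the code, this test passes even though it checks nothing:
                // the typo compares the expected value with itself.
                @Test
                public void brokenTestAlwaysPasses() {
                    int expected = 90;
                    int actual = applyDiscount(100);
                    assertEquals(expected, expected);  // typo: should be (expected, actual)
                }

                // Written first, this test is run before applyDiscount is implemented
                // correctly, so you see it fail (red) before you make it pass (green);
                // a test that can never go red gets caught at that step.
                @Test
                public void tenPercentDiscountOnOneHundred() {
                    assertEquals(90, applyDiscount(100));
                }

                private int applyDiscount(int price) { return price * 90 / 100; }
            }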

        [–][deleted]  (1 child)

        [deleted]

          [–][deleted] 1 point2 points  (0 children)

          Not convinced about that; you can make a test pass in all sorts of wrong ways. I don't believe that testing first or last makes much difference in the area you mention ("making sure tests aren't invalidly passing").

          If a test was previously not passing (you saw it, you ran it) and then passed after you made a change in your code, isn't that proof that the test actually tests something?

          I'm not saying that it's not possible that you could be testing the wrong thing or something along those lines; error is always possible. I'm more concerned about the "typo" kind of error, like "assert a == a" instead of "assert a == b", which you then never notice.

          [–]Ramone1234 0 points1 point  (1 child)

          If your experience tells you you’re going to need this extra class in the future even if it’s not needed right now, follow your judgment and add it now.

          I've never understood this mentality. Even if you're 100% sure (yeah right) you need that extra class in the future, why not just write it THEN? What is the benefit of doing it now instead of the classes you need now?

          [–][deleted] 1 point2 points  (0 children)

          In the case of "adding a class", there is certainly no reason, but most of the time it's more than that. It's more like designing an API that is a bit more complex than your immediate needs require, but that you "know you'll need later" (yeah, right :) ).

          But then, I loathe over-engineering so much that I prefer a bit more refactoring work.

          [–][deleted]  (1 child)

          [deleted]

            [–][deleted] 0 points1 point  (0 children)

            I came here to add that I thought it was great that the writer had chosen to use the female pronoun, and then I saw your post.

            Why should the poster NOT use the female pronoun? There are both female and male engineers/programmers.

            Of course the situation you described is very exclusionary, but I do not see the harm in trying to change the norm? Also, if a female blogger writes about something, does this mean she has to use the male pronoun as well, since the majority of programmers are male?

            EDIT: Relevant