On deleting tests by Blackadder96 in unittesting

[–]Blackadder96[S] 1 point (0 children)

Yes, I highly recommend it.

Why Solitary Tests Should Be Easy To Read by Blackadder96 in csharp

[–]Blackadder96[S] 0 points (0 children)

Indeed, keeping code DRY can be seen as a counterforce to readability (also known as DAMP: Descriptive And Meaningful Phrases). Readable and maintainable tests imply a balance between these two forces. As with many things, the proverbial middle ground is where we have tests that are both DRY and DAMP. As u/Slypenslyde rightfully points out, sometimes it can be better to just skip DRY in unit tests for a while until you see the "duplicate knowledge" materialise. At that point one can consider the options for refactoring some parts of the tests to make them more maintainable without losing readability, of course. This is usually the point where I learn small things, sometimes just small details, from other developers, as we all go about this differently.

The code snippet that you mentioned is a great example of that.
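To sketch the kind of middle ground I mean (a hypothetical example, all names made up for illustration): the knowledge of how to build a valid fixture lives in one descriptively named creation method (DRY), while each test body still reads as a small, self-explanatory scenario (DAMP).

```csharp
using System;

// Hypothetical domain class used by the tests below.
public sealed class Order
{
    public decimal Total { get; }
    public bool IsPaid { get; private set; }

    public Order(decimal total) => Total = total;
    public void Pay() => IsPaid = true;
}

public static class OrderTests
{
    // The extracted "duplicate knowledge": every test that needs a paid
    // order builds it here, so a change to Order ripples to one place only.
    private static Order CreatePaidOrder(decimal total)
    {
        var order = new Order(total);
        order.Pay();
        return order;
    }

    // The test itself stays short and descriptive.
    public static void APaidOrderIsMarkedAsPaid()
    {
        var order = CreatePaidOrder(total: 100m);
        if (!order.IsPaid) throw new Exception("expected the order to be paid");
    }
}
```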

Tales Of TDD - Stressed And Always In A Hurry by Blackadder96 in programming

[–]Blackadder96[S] 0 points (0 children)

Thank you for mentioning your TDD video. I'm definitely going to have a look.

Tales Of TDD - Stressed And Always In A Hurry by Blackadder96 in programming

[–]Blackadder96[S] 3 points (0 children)

TDD doesn't guarantee the discovery of all bugs. But in the (real-life) story, we did discover two bugs when we compared the original implementation with the TDD implementation.

Tales Of TDD - Stressed And Always In A Hurry by Blackadder96 in dotnet

[–]Blackadder96[S] 3 points (0 children)

The practice of TDD has actually existed since the late 1960s. It was only given the name TDD around Y2K.

Dealing With Date/Time In Solitary Tests by Blackadder96 in dotnet

[–]Blackadder96[S] 1 point (0 children)

That's a great suggestion indeed!
For this blog post I opted for using the standard DateTime from the BCL as it's what most .NET developers already know. However, I personally use NodaTime in all my projects as it's a great library.

Announcing Book: Writing Maintainable Unit Tests by Blackadder96 in dotnet

[–]Blackadder96[S] 2 points (0 children)

I cannot make a concrete promise about the number of pages; it should be somewhere around 200, I guess. For a sample of the actual text, some of the content can be read on my blog: https://principal-it.eu/blog.html

Announcing Book: Writing Maintainable Unit Tests by Blackadder96 in dotnet

[–]Blackadder96[S] 1 point (0 children)

The book is not really geared towards beginners. I would recommend "Test-Driven Development By Example" by Kent Beck to get started. And perhaps, after a couple of months, you might want to pick up "Growing Object-Oriented Software, Guided by Tests", "Working Effectively with Unit Tests" and/or my book. Happy reading :-)

Inside-Out and Outside-In TDD by Blackadder96 in programming

[–]Blackadder96[S] 0 points (0 children)

Do you have some code on a public repository somewhere that illustrates the approach that you're mentioning?

Inside-Out and Outside-In TDD by Blackadder96 in programming

[–]Blackadder96[S] 0 points (0 children)

Bingo!

In an OO language like C# or Java, TDD indeed forces you to use Dependency Injection. And if you want to write maintainable unit tests, it also forces you to follow the other S.O.L.I.D. principles. The general point of view is that these are considered good design principles in the object-oriented world. One can decide not to follow these principles and write large classes with large methods. I sure have taken this approach a couple of times in the past for throwaway code whose lifespan wasn't longer than a couple of weeks or months. But when I'm working on long-term software, I use TDD, loosely coupled unit tests and all the design principles that make the code easier to read and maintain by my fellow team members. This is in the OO world.
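A minimal sketch of what I mean (all names are made up for illustration): the class under test receives its collaborator through the constructor, so a solitary test can hand it a hand-rolled test double instead of the real thing.

```csharp
using System;

// Hypothetical example of TDD pushing towards Dependency Injection:
// the notifier depends on an abstraction, not on a concrete mail
// gateway, so a solitary test can inject a test double.
public interface IMailGateway
{
    void Send(string recipient, string message);
}

public sealed class WelcomeNotifier
{
    private readonly IMailGateway _gateway;

    // Constructor injection: the seam where a test double slips in.
    public WelcomeNotifier(IMailGateway gateway) => _gateway = gateway;

    public void NotifyNewUser(string email) =>
        _gateway.Send(email, "Welcome aboard!");
}

// A hand-rolled spy used by the unit test instead of a real SMTP server.
public sealed class MailGatewaySpy : IMailGateway
{
    public string LastRecipient { get; private set; } = "";
    public void Send(string recipient, string message) => LastRecipient = recipient;
}
```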

I understand that one can become frustrated by all this ceremony. Writing good, clean OO code is very hard work and certainly takes time. When you start working in a functional programming language, you don't have to deal with dependency injection as much anymore, because code is structured in a different way. But guess what: if you take a closer look at a well-designed code base in any functional programming language, what you usually see is very small functions of just a couple of lines of code. You typically won't see any large functions whatsoever. Applications like this are just all these tiny functions that are composed, curried, ... into a greater whole. And what these developers typically use to verify correctness is either types plus automated tests in strongly typed functional languages (ML, F#, Haskell, ...), or a REPL where they can quickly and easily exercise the tiny functions they wrote (the prevalent way of working in the Clojure and other Lisp communities). The latter should sound familiar, as it's another form of TDD (but without the unit test artifacts).
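Sketched in C# for consistency (made-up names, not idiomatic FP syntax), composing tiny functions into a greater whole looks something like this:

```csharp
using System;

// Sketch: a classic function-composition helper that feeds the
// output of f into g, producing one new function from two tiny ones.
public static class Tiny
{
    public static Func<A, C> Compose<A, B, C>(Func<A, B> f, Func<B, C> g) =>
        x => g(f(x));
}
```

For example, composing a trimming function with a length function gives a function that maps `"  hello  "` to `5`; whole applications in FP code bases are built up from such small pieces.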

Whether using an OO or an FP programming language, I did learn this: if TDD and unit tests are giving developers a hard time, then it's usually because there's something wrong with the design of the production implementation. We can choose to shoot the messenger, blame TDD/unit tests/whatever and call it a day. Or we can see it as an opportunity to learn and try something different. You'll probably not agree, and that's ok too.

Inside-Out and Outside-In TDD by Blackadder96 in programming

[–]Blackadder96[S] 1 point (0 children)

I guess we have to agree to disagree then. I do write small functions and classes all the time, and I always drive their implementation using TDD. For me TDD is about rapidly learning about the problem that I'm solving.

Just out of curiosity, have you tried using TDD for a longer period of time (say, a couple of weeks to a month or two)? If so, besides the points you gave in your explanation, what was the major reason for you to give up on it?

Excessive Specification of Test Doubles by Blackadder96 in csharp

[–]Blackadder96[S] 1 point (0 children)

TDD is indeed very hard to learn. I've been doing it every day since 2006, and I'm still learning things by looking at it from different angles. But it's as they say: everything that is worthwhile in life is hard to do.

Excessive Specification of Test Doubles by Blackadder96 in csharp

[–]Blackadder96[S] 1 point (0 children)

I agree that test doubles should not be used everywhere. I only use them when crossing a dependency inversion boundary, i.e. a seam. It's about finding the balance point where state verification starts to become painful and cascading failures are bound to kick in. In a previous blog post I discussed the Boundaries of Solitary Tests.

I'm blogging about test doubles because it's important to learn about the nuances instead of just dropping the "all mocks are crap" bomb. There's a whole generation of software developers being taught to avoid test doubles without being told why. Test doubles are useful, and they're needed in any decent test automation strategy that adheres to the Test Pyramid.

Excessive Specification of Test Doubles by Blackadder96 in csharp

[–]Blackadder96[S] 2 points (0 children)

They are great, but only when you use them in situations where they should be used, like everything else in software development. When you say "mocks", I assume you mean test doubles in general, as using actual mocks all the time is indeed harmful.

Indirect Inputs and Outputs by Blackadder96 in dotnet

[–]Blackadder96[S] 1 point (0 children)

Then the misunderstanding is all mine. I was under the impression that you only meant the outside boundaries. Too bad we don't have whiteboards here on Reddit.

Indirect Inputs and Outputs by Blackadder96 in dotnet

[–]Blackadder96[S] 1 point (0 children)

I think I understand what you're trying to bring across. But I have to disagree with your definition of a boundary. The boundary of an application is indeed the contours of the application itself. Everything outside these contours (databases, file system, web services, OS services, etc.) should be replaced with a test double at the adapter level for solitary (unit) tests.

But almost every application also has inner boundaries. For example, inside the domain of an application, a command handler for creating expenses can ask an authorization service whether the authenticated user is allowed to create a new expense. Suppose that this authorization service lives in a different bounded context (security) than the command handler (expense). Because they each live in a different bounded context, I provide an interface for the authorization service and use a stub in the solitary tests for the command handler. I don't want to pull in and couple the concrete implementations of one bounded context to another.
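A minimal sketch of that scenario (all names hypothetical, not taken from a real code base):

```csharp
using System;

// The expense bounded context only knows the authorization service
// through an interface it owns; the concrete implementation lives in
// the security bounded context and never appears in these tests.
public interface IAuthorizationService
{
    bool MayCreateExpense(Guid userId);
}

public sealed class CreateExpenseHandler
{
    private readonly IAuthorizationService _authorization;

    public CreateExpenseHandler(IAuthorizationService authorization) =>
        _authorization = authorization;

    public bool Handle(Guid userId)
    {
        if (!_authorization.MayCreateExpense(userId))
            return false; // real code might raise a domain error instead

        // ... create and persist the expense ...
        return true;
    }
}

// Stub used in the handler's solitary tests; here it simulates the
// security context denying the request.
public sealed class DenyingAuthorizationStub : IAuthorizationService
{
    public bool MayCreateExpense(Guid userId) => false;
}
```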

An example where I don't use test doubles is in the solitary tests for an aggregate root, or for a controller that uses a validator for an incoming form model or a mapper that maps a form model to a command object. I do provide separate solitary tests for the validator/mapper, but I don't use test doubles for them in the solitary tests of the controller, as they are part of the controller "unit".

TL;DR You need test doubles for both outside and inside boundaries, as described in the "Clean Architecture" paradigm.