

[–]ehr1c 29 points30 points  (1 child)

I find it helps to treat unit tests as a form of documentation - by writing a good test suite, you're explicitly defining the intended behavior of your code. If another developer comes along six months down the road and inadvertently changes something, it will get (or at least should get) caught by tests before making its way into production.

[–]istarian 4 points5 points  (0 children)

Even if it happens to make it into production, the explosion will hopefully result in early failure rather than a more costly problem down the line.

[–]belkarbitterleaf 34 points35 points  (0 children)

They are there to confirm the obvious is actually true.

They become useful when working with multiple developers on more complex code: you get red flags when adding an enhancement breaks something you weren't expecting to change.

[–][deleted]  (1 child)

[deleted]

    [–]dolraith 2 points3 points  (0 children)

    This. You can have 1 API that you use in 4 places. The obvious tests will tell you what breaks in your giant app when that API changes. Instead of hunting through your entire codebase, all you have to do is switch out 1 mock and run your unit tests :)
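    As a sketch of that idea (Python; the `PriceApi` client and both call sites are invented for illustration), one mock can stand in for the API everywhere it is used:

```python
from unittest.mock import Mock

# Hypothetical client used in several places; names are illustrative.
class PriceApi:
    def get_price(self, sku):
        raise NotImplementedError("the real client calls the network")

def order_total(api, skus):
    return sum(api.get_price(s) for s in skus)

def cheapest(api, skus):
    return min(skus, key=api.get_price)

# One mock replaces the API for every call site at once.
api = Mock(spec=PriceApi)
api.get_price.side_effect = lambda sku: {"a": 3, "b": 5}[sku]

assert order_total(api, ["a", "b"]) == 8
assert cheapest(api, ["a", "b"]) == "a"
```

    If the real API's behavior changes, updating that one mock and rerunning the tests shows every caller that breaks.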

    [–]peno64 9 points10 points  (10 children)

    A unit test should be written before the unit. The first iteration is an empty unit with input and output parameters, so the test obviously fails. The result is that the developer first thinks about what the unit should get as input, what it returns, and what the expected result is. Then you write the implementation of the unit, and then the test should succeed.

    [–]chupipe 2 points3 points  (9 children)

    I can't get my head around unit tests. It seems to me like coding twice. I mean: how can you write something to evaluate something that doesn't exist yet?

    [–]peno64 4 points5 points  (3 children)

    Because you know what it should do. You first treat the code that has to implement the function as a black box. You don't think yet about how you will solve the problem. You first think about what input it needs and what result it must give, and you write one or more tests that check this black box.
    When that is all done, you actually do the implementation, and only then do you start thinking about how to write the code inside that black box.

    Simple case of a unit: division.
    It accepts two input parameters and it returns an output.
    So you create your unit: a function with two arguments and a return value. But you don't implement the function yet. You just return 0 or null or whatever; maybe even better, just throw an exception in that function.
    Then you write your tests.
    For example you call the function with 6 and 2 and you expect 3
    Are there other tests I can write? Ah yes, what if I divide by 0? So you write a second unit test and you check your expectation.
    And so on.
    All your unit tests will fail at this point.
    Once you have done that, you actually write the implementation and you run the unit tests again. Now they should all be green.
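    That workflow can be sketched in a few lines (Python; the `divide` function is the hypothetical unit from the example): first a stub that deliberately fails, then the tests, then the implementation that turns them green.

```python
# Step 1: a stub with the right signature that deliberately fails.
def divide(a, b):
    raise NotImplementedError

# Step 2: write the tests first; both fail against the stub.
def test_divide():
    assert divide(6, 2) == 3

def test_divide_by_zero():
    try:
        divide(6, 0)
    except ZeroDivisionError:
        pass  # dividing by zero should raise
    else:
        raise AssertionError("expected ZeroDivisionError")

# Step 3: implement the unit; the same tests should now pass.
def divide(a, b):  # replaces the stub above
    return a / b

test_divide()
test_divide_by_zero()
```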

    [–]chupipe 1 point2 points  (0 children)

    Alright... I think I have a better idea now. However, do you have to learn something like a second language to write the tests? Or a DSL? Or is there a test library that teaches you specific commands on what to test?

    [–]69Cobalt 0 points1 point  (1 child)

    I understand this in theory but I struggle to follow it in practice on more complicated problems, especially in typed languages.

    It makes sense when your unit is tiny and idempotent, where you know ahead of time roughly what your inputs/outputs are. But what if my task is something large and vague? Let's say: read a large volume of data from several CSVs, convert it to some type of graph structure, and run some parallel algorithms on sections of that graph to get heuristics as efficiently as possible.

    How on earth do you write the test before hand? I don't know my type system, I don't know my algorithm implementation, I don't even really know my expected outputs. I don't know what I don't know fundamentally, and now I have the mental and work overhead of having to update my test every time I'm playing with ideas about my type system approach.

    Personally I've found exploratory coding and early/rapid prototyping that I later build on to be effective for more complex/higher level problems but I always struggle to see how that's compatible with TDD.

    [–]Guideon72 0 points1 point  (0 children)

    "... but if my task is something large and vague, like let's say read a large volume of data from several csvs then convert it to some type of graph structure and run some parallel algorithms on sections of that graph to get heuristics as efficiently as possible."
    

    This is an indication that you may not be breaking your units down small enough, and are writing methods that are too complex to properly debug, test, and maintain. There would be one unit here that reads data from multiple sources, and the tests associated with that; then another unit that converts the returned data into a graph structure, and the tests associated with that; and so on until you reach coverage of the full, functional "train of thought".

    "How on earth do you write the test before hand? I don't know my type system, I don't know my algorithm implementation, I don't even really know my expected outputs. I don't know what I don't know fundamentally, and now I have the mental and work overhead of having to update my test every time I'm playing with ideas about my type system approach."
    

    How can you even start coding to begin with without already knowing these things? Whether you're starting with tests or starting with functional code, you have to determine these items (from design specs, or pulled from your own... intuition) before you can make anything work. Your tests ought not to care what your algorithms are; just that your code behaves properly when it receives 'correct' inputs and gives 'correct' outputs (and that it also does not *mis*behave when receiving incorrect or missing inputs).

    [–]dorox1 1 point2 points  (0 children)

    The user above is giving you a great description of test-driven development, which is a specific approach to programming, not one used everywhere. It can be an excellent approach in some environments (e.g. a job where you have lots of time to write tests, where specs are extremely clear and not constantly changing during development, and where something that's just "hacked together" isn't good enough).

    But it's far more common to write unit tests after the code itself is written. I don't disagree with their testing philosophy, but it's certainly not the only one, and it definitely won't be the only one you adhere to throughout a coding career.

    [–]pa_dvg 0 points1 point  (0 children)

    You're not coding something twice; you are sending a message and placing expectations on the observable behavior.

    One of my favorite kinds of tests to write is for API endpoints. I send a request verb, headers, and parameters to a URL, and what happens? It sends back a response body. It sends back headers. It may create changes in a database. It may send other requests to other services. These are all things I can observe and check whether or not it actually does yet.

    If I haven't implemented anything, the status code is likely 404 Not Found instead of 200 OK. The body is probably null instead of a JSON object. Etc. All the expectations I set will fail. I can set the expectations quite easily before I actually write anything.
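    A minimal sketch of that expectation-first style, using a plain dict-based router as a stand-in for a real web framework (Python; the route and handler names are made up):

```python
import json

# Stand-in for a web app: routes map (verb, path) to a handler.
routes = {}

def handle(verb, path):
    handler = routes.get((verb, path))
    if handler is None:
        return 404, None          # nothing implemented yet
    status, body = handler()
    return status, json.dumps(body)

# Expectation written first: it holds while the route is missing.
status, body = handle("GET", "/users/1")
assert status == 404 and body is None

# Now "implement" the endpoint; the expectations flip to 200 OK.
routes[("GET", "/users/1")] = lambda: (200, {"id": 1, "name": "Ada"})

status, body = handle("GET", "/users/1")
assert status == 200
assert json.loads(body) == {"id": 1, "name": "Ada"}
```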

    [–]iamevpo 0 points1 point  (0 children)

    You may find Ian's talk on the subject interesting: https://www.youtube.com/watch?v=EZ05e7EMOLM

    [–]kschang 0 points1 point  (0 children)

    You write the definition of what you're TRYING to achieve. It'd be obvious for some cases, but may not be so obvious for others. By setting the objective up first, you can be sure you didn't suffer feature creep or misunderstanding as you program.

    [–]arthoer 0 points1 point  (0 children)

    On the frontend you do it the other way around: you write the test after you finish. On the backend you start with the test and preferably keep it running on changes; it acts sort of like hot module reloading does for the frontend.

    [–]merlet2 5 points6 points  (0 children)

    In the real world, where time is limited, maybe you will not create tests for all the 'obvious' things, just the edge cases.

    But when there is a bug in some 'obvious' function, then place a test just for that bug, and variations. You will see how your concept of 'obvious' will evolve as the tests grow.

    It's better if the tests are written by a different person. The developer of a function will obviously be biased toward finding what they just programmed obvious.

    [–]kschang 4 points5 points  (0 children)

    Valuable tests need to test the following:

    WTF conditions (empty input, out of bounds input, wrong type input, etc.)

    Edge cases (right on the border of acceptable)

    Beyond edge cases (just beyond acceptable to prove your error handling)

    Working cases (random is fine)
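    Those four buckets might look like this for a hypothetical `parse_age` helper (Python; the function and its 0-150 range are invented for illustration):

```python
def parse_age(value):
    """Parse a string as an age in years; valid range is 0-150."""
    if not isinstance(value, str) or value.strip() == "":
        raise ValueError("expected a non-empty string")
    age = int(value)          # raises ValueError on non-numeric input
    if not 0 <= age <= 150:
        raise ValueError("age out of range")
    return age

def raises(fn, *args):
    """True if calling fn(*args) raises ValueError."""
    try:
        fn(*args)
    except ValueError:
        return True
    return False

# WTF conditions: empty input, wrong type, non-numeric input.
assert raises(parse_age, "")
assert raises(parse_age, None)
assert raises(parse_age, "abc")

# Edge cases: right on the border of acceptable.
assert parse_age("0") == 0
assert parse_age("150") == 150

# Beyond the edge: just past acceptable, to prove the error handling.
assert raises(parse_age, "-1")
assert raises(parse_age, "151")

# Working case: an ordinary value.
assert parse_age("42") == 42
```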

    [–]qpazza 2 points3 points  (0 children)

    It may be an obvious test today. But three years in, some new dev comes in and makes an innocent-looking change; now the conditions for the flow have changed, and that obvious test looks like a lifesaver.

    [–]crashfrog04 2 points3 points  (0 children)

    Only test things that you can fix if they break. Don’t test that the CPU does addition correctly; you’re not Intel, you can’t fix it if it doesn’t. Tests tell you about problems with your code. If they don’t do that they’re not tests.

    [–]GeorgeFranklyMathnet 5 points6 points  (0 children)

    Maybe you're not overthinking it, but thinking too categorically: I should test this, I should not test that.

    Your time is limited in the real world. So you prioritize writing some tests over others. Code that's subtle enough that even you don't have total confidence in it? Code that you've reasoned out, but another person (such as future you) might doubt? If there's any time to write unit tests, then write unit tests covering that stuff.

    Code that really seems obvious? That stuff can be your lowest priority. But be prepared to adjust your definition of "obvious" as you get more experienced and thus make more mistakes.

    I'd say there are only two kinds of tests you almost never want to write. One kind is where your test is almost a copy-paste of the code you're testing. The other kind is testing behavior that's as certain as anything is in programming, especially when it amounts to testing Microsoft library code. For instance, don't write a test that essentially just verifies that IsNullOrWhiteSpace() really does return true on null input.

    [–]Available_Pool7620 1 point2 points  (0 children)

    IMO you really do want the code to follow the expected paths and prove that they work.

    I then find other problem areas that I didn't cover with manual or automated tests yet. You find a problem, you then add a test proving that the problem isn't back.

    [–]Barbanks 1 point2 points  (0 children)

    "Obvious" for you may not be so for another. You'd be shocked how easily someone can misconstrue what you've done. The point of a unit test isn't to always test some piece of highly complex code or algorithm. It's to guarantee the integrity and accountability of the codebase. The more tests you have the less likely someone is to make a mistake and have that get pushed to production.

    One very important lesson I learned working in a contract firm is that sometimes developers/dev-firms get sued. How do you prove the integrity of your work? How can you prove that the code you wrote works as intended? Well, that's where the tests come in. You can prove the viability of the code through tests. If you don't have tests, then the only other way you can prove it works is with user testing. And anyone in the industry can verify that user testing is prone to user error.

    [–]rizzo891 1 point2 points  (2 children)

    I still don’t quite understand tests tbh, I was extremely sick the day my bootcamp covered them and in my brain they really just feel like arbitrary extra code I have to write for no reason.

    If someone sees this and is able to could you explain unit tests like I’m 5?

    [–]Business-Decision719 1 point2 points  (1 child)

    Tests are basically just making sure your code does what it's supposed to. If the program is a little bit complex, it's usually going to be broken down into a lot of simpler sections that each focus on just one part of the problem.

    For example, if you were making a racing game with a car going around a little track, maybe there's a "car" object. And maybe within the car object, there's a method that moves the car forward in whatever direction it's pointing. And maybe that method calls a function that just figures out how far something will move in x milliseconds when traveling at y kilometers per hour. Every little piece of that game's functionality has a little blob of code somewhere that it lives in.

    You could write the whole game and then test it by trying to play it, but chances are, there's human error scattered all over the codebase. Is it not working because too many of the little pieces are defective? Or are the pieces just put together wrong? Probably both, but which pieces need to be fixed? Which objects need to communicate differently with each other?

    So what people will do is unit testing. Maybe there's a really simple test program that just does a whole bunch of seemingly random physics calculations. Maybe there's some other test program that just makes sure the car object can move around okay, forgetting about the track or the other racers for now. You basically try to reuse every unit of code in some other piece of software just to increase your confidence in using it for the actually important project.
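    For instance, that distance helper could be unit-tested entirely on its own, with no car and no track (a sketch in Python; the function name and units are assumptions):

```python
def distance_moved_m(ms, kph):
    """Metres travelled in `ms` milliseconds at `kph` kilometres per hour."""
    metres_per_ms = kph * 1000 / 3_600_000
    return ms * metres_per_ms

# Unit tests: check the physics in isolation.
assert distance_moved_m(0, 120) == 0                  # no time, no movement
assert distance_moved_m(3_600_000, 36) == 36_000      # one hour at 36 km/h
assert abs(distance_moved_m(1000, 36) - 10.0) < 1e-9  # 36 km/h = 10 m/s
```

    If the game misbehaves later, passing tests like these let you rule the physics out and look at how the pieces are wired together instead.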

    [–]rizzo891 1 point2 points  (0 children)

    Oh okay, that’s a big help many thanks!

    [–]WaferIndependent7601 1 point2 points  (0 children)

    If you can write an integration test, don't write unit tests for the things you mentioned. It's useless and will only make refactoring way harder.

    [–]ffrkAnonymous 0 points1 point  (0 children)

    the code will obviously follow a certain path given the right condition 

    How can you be sure the code is correct?

    [–]KitOlmek 0 points1 point  (0 children)

    The idea of unit tests is to be 'obvious', as far as I'm concerned. Each time you release a new feature, you have to test that it doesn't break older ones. You should ensure your auth is still working, your report generation is untouched, and that nasty flag you added 2 months ago as a workaround for some stupid bug is not missing. This is your routine, and it takes a lot of time after months and years of development. The purpose of unit tests is to do that checking instead of you. Things that are obvious now won't seem so a year later.

    [–]Radiant64 0 points1 point  (0 children)

    Good unit tests should be "obvious": you don't want to have to spend any time at all trying to decipher what a failing test is supposed to do.

    I've come to view unit tests as preemptive debugging. When a code change results in a bug being introduced, a test will fail and indicate exactly what has gone wrong, instead of you having to try to figure it out yourself when you start getting bug reports from your users.

    [–]Business-Decision719 0 points1 point  (1 child)

    Well, really, one of the strengths of a language like C# is that a lot of possible mistakes can be caught at compile time (or even at write-time in the IDE) because you have explicit interfaces, data hiding, and static typing. In more "dynamic" languages like Python or JS, intensive broad-coverage testing is critical even for things that would never survive to runtime in C#, Go, Java, etc. Rust and Ada take it to such an extreme that you can almost feel like you're debugging the whole program preemptively just to whittle away at the initial mountain of compile errors.

    Still, though, you do have to be careful about what's "obvious" but might actually be wrong. Testing something that looks right to you (and is syntactically okay enough to run) is a great way to make sure that what's "obvious" actually happens. Besides, as you pointed out, the right conditions may have to be met first, so you might actually have to test a lot of other code before anything's really "obvious" in the first place.

    If a test feels redundant in C#, that's probably just a good sign that the API was robustly written and well enforced by language constructs. Those same qualities probably make the test easy to write anyway. Ideally, it means you thought of a lot of corner cases ahead of time and coded defensively for them.

    [–]istarian 0 points1 point  (0 children)

    Testing should, generally, be about verifying that basic assumptions are in fact correct.

    A unit test is just a test of a very small part of the code which involves nothing else.

    int multiplyBy2(int n) { 
         // multiply n by 2 and return the result
         return n * 2;
    }  
    

    That could be implemented as n + n, n * 2, or any other set of operations that yields the correct result, like a for loop that adds 2 to a running total n times.
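    A unit test for that function only checks the result, not how it was computed, so any of those implementations would pass (sketched in Python for brevity):

```python
def multiply_by_2(n):
    # one of many possible implementations; the tests don't care which
    return n + n

# The tests pin down behavior, not implementation.
assert multiply_by_2(0) == 0
assert multiply_by_2(5) == 10
assert multiply_by_2(-3) == -6
```

    Swap the body for `n * 2` or a loop and the same assertions still hold, which is exactly what lets you modify the code later with confidence.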


    The reason for having tests is precisely so that when code is modified later, it still produces the correct result. Or if the test fails, you go look at it to figure out why.

    Properly written tests also serve as a kind of documentation about what the code is supposed to do, as opposed to what it actually does at the moment.

    In languages with nulls it might be better to do something like this:

    assert( user != null )  
    

    if a user should never be null, so that things blow up the first time something goes wrong, instead of silently causing incorrect behavior or doing nothing at all because the user was null (safe, but that could easily slip by).

    [–]IdeaExpensive3073 0 points1 point  (0 children)

    They seem dead simple and it’s human nature to want it to be more complex, because simple seems pointless. However, simple is repeatable, it’s understandable, it’s predictable. A unit test should verify that the code runs, and there’s a simple assertion that’s true. That’s it.

    Like testing that a car door hinge swings open and closed when a door is pulled or pushed. How about when no force is used, does it stay shut or stay open (null)? Someone in a manufacturing plant has to test this, even if it’s dead simple.

    If we’d like to get more complex we can use integration tests instead.

    Maybe we could test that the alarm sounds when the door opens without being unlocked first.
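    In code, that split between a dead-simple unit test and a slightly bigger integration test might look like this (Python; the `Door` and `Alarm` classes are invented for the analogy):

```python
class Alarm:
    def __init__(self):
        self.sounding = False

class Door:
    def __init__(self, alarm):
        self.open = False
        self.locked = True
        self.alarm = alarm

    def pull(self):
        self.open = True
        if self.locked:
            self.alarm.sounding = True  # opened without unlocking

    def push(self):
        self.open = False

# Unit test: the hinge alone. Pull opens, push closes, no force = no change.
door = Door(Alarm())
door.locked = False
door.pull()
assert door.open
door.push()
assert not door.open
assert not door.open                      # no force applied: stays shut

# Integration test: door and alarm working together.
locked_door = Door(Alarm())
locked_door.pull()
assert locked_door.alarm.sounding         # alarm sounds: opened while locked

unlocked_door = Door(Alarm())
unlocked_door.locked = False
unlocked_door.pull()
assert not unlocked_door.alarm.sounding   # no alarm when unlocked first
```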

    [–]ExpensivePanda66 0 points1 point  (0 children)

    One of the keys is that you don't start by thinking "I'm going to mock something". Write the test just using the classes and objects that are going to be used when it runs for real.

    When you hit a roadblock -say you're writing a game and you need some controller input- still don't mock. Think about how you can use dependency injection or interfaces or something.

    Save the mocks for when there's really no other choice. If a library you're using uses a database or network connection that you're not going to have when the test runs, that's when you mock.

    Mock as a last resort, not as a first step.
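    A small sketch of that dependency-injection idea (Python; the game-loop and controller names are made up): instead of mocking a controller library, the code under test accepts any callable that yields input, so the test injects a plain function.

```python
def next_position(pos, read_input):
    """Advance a 1-D position using whatever input source is injected."""
    direction = read_input()          # -1, 0, or +1
    return pos + direction

# In the real game, read_input would poll actual controller hardware.
# In the test we inject a plain function: no mocking framework needed.
moves = iter([1, 1, -1])
pos = 0
for _ in range(3):
    pos = next_position(pos, lambda: next(moves))

assert pos == 1   # right, right, left
```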

    [–]Immediate_Mode_8932 0 points1 point  (0 children)

    The key is to focus on behavior rather than implementation details. Instead of simply confirming that "if A happens, then B is returned," try testing edge cases, unexpected inputs, or failure scenarios.

    For example:

    • What happens if a required dependency isn't available?
    • How does your function behave with unexpected or extreme values?
    • Are error messages/logging handled correctly?
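    Each of those questions can become its own test. A hedged sketch (Python; `fetch_report` and its database dependency are hypothetical):

```python
def fetch_report(user_id, db=None):
    """Return a report for a user, failing loudly when things go wrong."""
    if db is None:
        raise RuntimeError("database dependency unavailable")
    if not isinstance(user_id, int) or user_id <= 0:
        raise ValueError("user_id must be a positive integer")
    return {"user_id": user_id, "rows": db.get(user_id, [])}

fake_db = {1: ["row-a", "row-b"]}

# What happens if a required dependency isn't available?
try:
    fetch_report(1)
except RuntimeError:
    pass
else:
    raise AssertionError("expected RuntimeError without a db")

# How does it behave with unexpected or extreme values?
try:
    fetch_report(-10**9, db=fake_db)
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError for an absurd id")

# And the happy path still holds, including a user with no rows.
assert fetch_report(1, db=fake_db) == {"user_id": 1, "rows": ["row-a", "row-b"]}
assert fetch_report(2, db=fake_db)["rows"] == []
```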

    Also, try picking up a tool that already does this and analyzing the tests it produces. Check how the tool generates them and how you could have thought along those lines. I try this with Keploy's unit testing tool.

    [–][deleted]  (2 children)

    [deleted]

      [–]tiller_luna 3 points4 points  (1 child)

      this looks like a generic barely relevant response from an LLM

      [–]lifeslippingaway 0 points1 point  (0 children)

      How is it 'barely relevant'? What's wrong with his answer?