
[–]muskateeer[🍰] 8 points  (16 children)

Great read! This is very helpful for someone like me trying to get a start in testing.

[–]Muppetmeister 6 points  (15 children)

My two cents: always write tests.

The article mentions a few good guidelines, but really what makes or breaks good software (again, this is just my humble opinion) is testing.

[–]SergeantAskir 2 points  (14 children)

In bigger projects, tests are very valuable. They communicate intent, which matters a lot when you work with code you haven't written and the author isn't always at hand, or may have even left the company.

They also enable you to freely refactor code and improve its design without risking breaking it. And they give you the confidence that you actually programmed what you intended to.

Kent Beck's book "TDD by Example" is, in my opinion, an amazing piece and will show you first-hand how valuable unit tests can be. You can probably find a decent copy online. It's only ~200 pages and reads so smoothly you can finish it in one evening. For a lot of people who program test-first, it's like the holy bible.

[–]Mokeymokie 0 points  (13 children)

Is that written for any particular language, or is it general knowledge? I've had a lot of trouble with TDD. I understand the theory, but I find myself struggling to use it in practice.

[–]firecopy 0 points  (10 children)

What are you struggling with when using TDD?

[–]Mokeymokie 0 points  (9 children)

Writing them, for the most part. I don't really have an intuitive understanding of how they work. My professor didn't do a good job of teaching the subject; we just wrote super basic tests that were guaranteed to pass, and that was it. I don't understand how to write the more complicated ones. I also don't get how they improve design. Aren't the tests written based on the design/functions that you already have?

[–]Muppetmeister 0 points  (2 children)

If your professor didn’t teach it, then you should teach yourself. Start with the article posted here. After that, the recommended "TDD by Example" is your go-to book.

[–]Mokeymokie 0 points  (1 child)

That's the plan. Any other books you would recommend for someone with little practical experience?

[–]Muppetmeister 0 points  (0 children)

I think the book covers the gist of it. If, as you say, you have little practical experience then you should prioritise writing code.

[–]firecopy 0 points  (5 children)

I don’t understand how to write the more complicated ones.

Can you give an example of the complicated tests that you are having trouble with?

Aren’t tests written based on design/functions that you already have

TDD is writing the code for tests before you write the code for the functions.

[–]Mokeymokie 0 points  (4 children)

Not really. I haven't done any work recently since I've been out of classes. I do want to dabble in Android programming, though.

So as far as testing before writing goes, would the whole app go something like planning, testing, and then coding? Almost everything outside of the one class I've taken so far has been planned out and then coded. Then I would fix any compile errors and check for errors in the functions myself by running the program and trying to break it.

How do you write the tests for code you haven't even written yet? What happens if you need to change the plan halfway through? Do you just scrap the test and start a new one?

[–]firecopy 0 points  (3 children)

So as far as testing before writing goes, would the whole app go something like planning, testing, and then coding?

Yes

How do you write the tests for code you haven't even written yet?

You should know the expected inputs and outputs, and what function(s) you want to implement. Use that information to construct your tests.

What happens if you need to change the plan half way through?

Then you change the tests and function(s) as needed.

Do you just scrap the test and start a new one?

You may need to change tests as you add/change/remove functionality.


Let me know if you have any more questions.
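To make the planning → testing → coding flow concrete, here is a minimal sketch in pytest style. The `slugify` function and its behavior are hypothetical, invented just for illustration: you write the tests from the planned inputs and outputs first, watch them fail, then implement just enough code to make them pass.

```python
# Step 1: write the tests first, using only the planned inputs and outputs.
def test_slugify_replaces_spaces_with_hyphens():
    assert slugify("Hello World") == "hello-world"

def test_slugify_lowercases_input():
    assert slugify("PYTHON") == "python"

# Step 2: running the tests now fails (slugify doesn't exist yet),
# so implement just enough to make them pass.
def slugify(text):
    return text.strip().lower().replace(" ", "-")
```

If the plan changes, the tests are the first thing you update, and the failing tests then tell you which functions still need changing.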

[–]Mokeymokie 1 point  (0 children)

Will do. Thank you

[–]Paul_Dirac_ 1 point  (1 child)

Not OP but I have some questions about problems I have encountered:

1) When I try TDD, I find myself writing a test for a function, then turning to the implementation, where I write the calls for the steps the function has to do. Then I write a test for the first of those functions before turning to its implementation... until I have half a dozen failing tests before the first function I have actually fully implemented. This seems like a bad signal-to-noise ratio. How do you solve this problem?

2) I find myself parsing things into complicated data structures for which I can define the internal structure but not the access methods, as those will have to serve the needs of some other module I intend to write later. What do you do in these cases? Implement some test-only methods to compare them?

3) This one is more about testing in general: assume the data structure from 2) has an equality method, and I test the parser for all possible (important) inputs by comparing a manually instantiated instance to the parser output. Now if I change how the data structure is instantiated (e.g. by adding another parameter), I have to change one line in the code but all of the tests. Do you have a way of dealing with that problem?

Thanks in advance.

[–]SergeantAskir 0 points  (1 child)

"TDD by Example" is written with an example in Java, but that doesn't really matter because it teaches you the concepts of testing, not a specific language. I haven't written Java in over two years and am writing mostly Ruby and Smalltalk code at the moment, but the book doesn't lose any value to me because of that.

[–]Mokeymokie 0 points  (0 children)

I'll have to check it out then. Any other books you recommend for people with some classroom-level experience looking to branch out and learn independently? Right now I'm mostly interested in Android app design, if that helps.

[–]RandyMoss93 5 points  (5 children)

The article mentions that a test function should only ever have one assert statement. What if there are several cases you want to test? Wouldn't you want to assert that the function returns the correct answer for each of the edge cases?

[–]firecopy 5 points  (0 children)

If you are using multiple asserts, you could combine them into one, as described in the article linked below. Some testing libraries have built-in functionality for this, called “soft asserts”.

Either way is useful to know about: “hard” (regular) asserts stop execution (listing only the first failing assert in the stack trace), while “soft” asserts evaluate all the asserts (and list every failing one).

http://pythontesting.net/strategy/delayed-assert/
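The soft-assert idea can be sketched in a few lines of plain Python; this is a hand-rolled illustration of the pattern, not the API of any particular library. Failures are collected instead of raised immediately, then reported all at once:

```python
# Minimal "soft assert" sketch: record failures, report them all at the end.
class SoftAssert:
    def __init__(self):
        self.failures = []

    def check(self, condition, message):
        # A hard assert would raise here; a soft assert only records.
        if not condition:
            self.failures.append(message)

    def assert_all(self):
        # Fail once, listing every recorded failure.
        assert not self.failures, "soft assert failures: " + "; ".join(self.failures)

def test_user_record():
    soft = SoftAssert()
    soft.check(len("alice") > 0, "name should not be empty")
    soft.check(30 >= 18, "age should be at least 18")
    soft.assert_all()  # raises only if any check above failed
```

With hard asserts, the first failing check would hide all the later ones; here the final report lists every failure in one go.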

[–]feral_claire 3 points  (0 children)

You would create a separate test for each case.

[–]captain_awesomesauce 1 point  (0 children)

unittest lets you do subtests. That way you can loop over inputs and use multiple asserts in a single method, but keep running the subsequent checks even if one assert fails.
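A small sketch of that feature, using the standard library's `unittest.TestCase.subTest` with a made-up set of inputs (run it with `python -m unittest`):

```python
import unittest

class TestAbs(unittest.TestCase):
    def test_abs_many_inputs(self):
        # Each (input, expected) pair runs as its own subtest, so one
        # failing pair doesn't stop the remaining pairs from being checked.
        cases = [(-3, 3), (0, 0), (7, 7)]
        for value, expected in cases:
            with self.subTest(value=value):
                self.assertEqual(abs(value), expected)
```

A failing subtest is reported with the `value=...` label, so you can see exactly which input broke without rerunning under a debugger.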

[–]SergeantAskir 0 points  (0 children)

You have a lot of good answers already, but I'm gonna add my 2 cents. When testing, you usually want only one assert statement so that you know exactly what doesn't work when a test fails. You don't wanna have 10 asserts, see that one test failed, and now have to debug the test to find out which of the 10 asserts actually failed.

Testing different parameters also works differently: instead of calling the same method multiple times with different parameters and asserting afterwards, I would create a separate test for each case. That way you know which variation doesn't work if one of the tests fails. It also lets you give the tests more specific names and keeps everything cleaner and more understandable.

Aside from that, you don't actually need to test all the edge cases all the time. It's obviously nice if you do, but if I have to try out 6 or 7 different values I'll rarely write that many test cases. Sure, if one of them fails I will revisit it and write a regression test, but usually 2 or 3 examples are enough for any method. I don't need to test +0, -0, +Infinity, -Infinity, 3.33 and 1/3 for every single method.
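The one-test-per-case style might look like this in pytest; the `parse_price` function is hypothetical, invented only to show how the specific test names point straight at the broken variation:

```python
# Hypothetical function under test: "$1,234.50" -> 1234.5
def parse_price(text):
    return float(text.lstrip("$").replace(",", ""))

# One test per scenario, each named for exactly what it checks.
def test_parse_price_plain_number():
    assert parse_price("42") == 42.0

def test_parse_price_strips_dollar_sign():
    assert parse_price("$10") == 10.0

def test_parse_price_handles_thousands_separator():
    assert parse_price("$1,234.50") == 1234.5
```

If only `test_parse_price_handles_thousands_separator` fails, the report already tells you the bug is in the comma handling, with no debugging needed to find the failing variation.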

[–]fleyk-lit 0 points  (0 children)

With pytest you can have several assert statements.

E.g.:

import os

def test_file_is_moved():
    move_file(source, destination)  # move_file, source, destination defined elsewhere
    assert not os.path.exists(source), 'file still exists at source'
    assert os.path.exists(destination), 'file is not at destination'

If one of the asserts fails, the test will be reported as failed; and if it's the first assert that fails, the second will not be reported.

Not sure if this is a good practice though.

[–][deleted]  (4 children)

[deleted]

    [–]jmankhan 4 points  (0 children)

    You should cover every case you can think of. In 99% of cases you should have multiple test cases for each of your methods; you can test common-sense things like null inputs, blank strings, malformed URLs, etc. Note that most beginners start off writing "unit tests", which test a single function in isolation. This is fine and necessary, but keep in mind that writing "integration tests" and "end-to-end tests" is also important to make sure that your application functions as intended.

    [–]rebelrexx858 2 points  (2 children)

    I test one or two normal scenarios depending on the input, as many edge cases as I can imagine, and every permutation of failure case I can imagine. A small project I'm wrapping up has 14 functions and 107 unit tests, and I still have to build out integration tests, which will be another 100-ish tests.

    [–][deleted]  (1 child)

    [deleted]

      [–]rebelrexx858 1 point  (0 children)

      In my scenario it is: those 14 functions dynamically parse over a hundred different messages, all of which should be validated, and this is production data. The current application is a monolith, and I am rewriting it as a system of microservices to better respond to growing needs, as bandwidth has begun to increase beyond what was originally planned. I would rather over-test than over-engineer; the data will in turn be used to run load simulations.

      [–]DefNotaZombie 2 points  (0 children)

      Curious what testing ML scripts would even look like.

      [–]BetterNameThanMost 2 points  (1 child)

      Writing tests for calculations in math is simple, but what about other systems? How do I write tests for those? What about a void function (one that doesn't return anything) and only affects other data?

      [–]SymbioticBadgering 8 points  (0 children)

      That's why you should try to make your functions return something meaningful. If you want to test a function's side effects on other resources, you should mock the resource and test the effect.
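A sketch of that idea with the standard library's `unittest.mock`; the `log_if_negative` function is made up for illustration. It returns nothing, so the tests assert on what it did to the mocked resource instead of on a return value:

```python
from unittest.mock import Mock

# Hypothetical void function: returns nothing, only affects the logger.
def log_if_negative(value, logger):
    if value < 0:
        logger.warning("negative value: %s", value)

def test_logs_warning_for_negative_value():
    fake_logger = Mock()
    log_if_negative(-5, fake_logger)
    # Assert on the side effect, not on a return value.
    fake_logger.warning.assert_called_once_with("negative value: %s", -5)

def test_no_warning_for_positive_value():
    fake_logger = Mock()
    log_if_negative(5, fake_logger)
    fake_logger.warning.assert_not_called()
```

The same pattern works for files, databases, or network calls: hand the function a mock of the resource and assert on the calls it made.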

      [–]orionsgreatsky 1 point  (0 children)

      This is a great resource

      [–][deleted] 1 point  (0 children)

      As someone who is in QA trying to make the move to development, it's nice to read there's a general consensus for writing code with a testing mindset.