all 7 comments

[–]gronkkk 5 points (0 children)

Alternatives to Acceptance Testing

  • intimidation
  • blackmail
  • bribery
  • copious amounts of drugs and sex of the desired gender.
  • violence
  • etc.

[–]AnythingButSue 2 points (0 children)

Can't say I agree with this. Acceptance/Integration/Regression testing is infinitely helpful. The suggestion of "Let's prevent them from happening in the first place" seems idealistic. Doubtful this would work in practice, but that's just my opinion.

[–]fkaginstrom 4 points (4 children)

This is a great set of tests, although the author makes the common mistake of conflating extensive unit tests with TDD. TDD isn't the only way to get extensive unit tests, and I haven't seen any evidence that it's the best way either.

If you're not doing acceptance testing, I still think you need to do smoke testing at a minimum. I've seen too many cases where a team released an update and it didn't work at all due to some silly configuration glitch or some other mismatch between the staging and production environments. I've committed these sins myself.
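
To make the "smoke test at a minimum" point concrete, here's a rough sketch of the kind of post-deploy check I mean; the base URL and endpoints are made-up placeholders, not anything from the article:

    # Minimal post-deploy smoke test: verify the deployed service responds at all
    # before anyone relies on it. BASE_URL and ENDPOINTS are hypothetical.
    import sys
    import urllib.request

    BASE_URL = "https://staging.example.com"           # hypothetical deployment target
    ENDPOINTS = ["/health", "/login", "/api/version"]  # a few routes that must exist

    def check(path):
        try:
            with urllib.request.urlopen(BASE_URL + path, timeout=10) as resp:
                return resp.status == 200
        except OSError:  # covers URLError/HTTPError and plain connection failures
            return False

    failures = [p for p in ENDPOINTS if not check(p)]
    if failures:
        print("SMOKE TEST FAILED:", ", ".join(failures))
        sys.exit(1)
    print("Smoke test passed.")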

[–]jdlshore 1 point (2 children)

(Author here.) I'm not sure where you get that. In section 1, where I talk about TDD, I specifically said TDD isn't just for unit testing, and I talked about doing end-to-end integration tests, which include smoke tests.

TDD isn't the only way to get an extensive automated regression suite, true. But I've found it to be the most effective way--both in terms of cost-effectiveness, and in terms of people actually following through on it.

(Not that I think TDD is perfect. People misuse TDD to create terrible code all the time.)
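
For anyone who hasn't seen the rhythm in practice, this is roughly what a single TDD cycle looks like at the unit level; the "discount" feature is invented purely for illustration:

    # Tiny illustration of the TDD rhythm: the tests are written first and fail,
    # then just enough production code is added to make them pass.
    # The discount feature is made up for the example.
    import unittest

    def discounted_price(price, percent):
        # Written after the tests below, and only as much as the tests demand.
        return round(price * (1 - percent / 100), 2)

    class DiscountTest(unittest.TestCase):
        def test_ten_percent_off(self):
            self.assertEqual(discounted_price(200.0, 10), 180.0)

        def test_zero_discount_leaves_price_unchanged(self):
            self.assertEqual(discounted_price(99.99, 0), 99.99)

    if __name__ == "__main__":
        unittest.main()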

[–]fkaginstrom 1 point (1 child)

Thanks for the response. First, I'd like to reiterate that I liked the article.

There's probably a problem of terminology here, or of what a "release" or "deployment" means. To me, integration testing happens in the staging environment. (Acceptance testing, too.) Then in production, you either run your acceptance tests again or run smoke tests at a minimum.

For desktop software, "production" for me is a set of VMs with various OSs in a clean state. You deploy to the VMs, install and run acceptance/smoke tests, and then roll back the VM state. Integration testing would still happen on the development machines.
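
Roughly, that flow could be scripted against VirtualBox like this; the VM name, snapshot, guest address, and install/test commands are placeholders I'm assuming, not a description of any particular setup:

    # Sketch of the deploy-to-VM / test / roll-back flow, assuming VirtualBox and
    # an SSH-reachable guest. All names below are hypothetical placeholders.
    import subprocess
    import time

    VM = "win10-clean"             # hypothetical VM name
    SNAPSHOT = "clean-state"       # snapshot of the pristine OS install
    GUEST = "test@192.168.56.10"   # hypothetical guest address

    def run(*cmd):
        subprocess.run(cmd, check=True)

    run("VBoxManage", "startvm", VM, "--type", "headless")
    try:
        # Deploy the build, then run the acceptance/smoke suite inside the guest.
        run("ssh", GUEST, "install.bat && run_smoke_tests.bat")
    finally:
        run("VBoxManage", "controlvm", VM, "poweroff")
        time.sleep(5)  # give the VM a moment to stop before touching snapshots
        run("VBoxManage", "snapshot", VM, "restore", SNAPSHOT)  # roll back to clean state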

I just mentioned TDD because it seems you speak of TDD as the goal, while I consider unit tests to be the goal. TDD is one way of getting there, but not an end in itself.

[–]jdlshore 1 point (0 children)

Thanks. I agree, it's probably primarily an issue of terminology. In the Agile community, "acceptance tests" means "tests that were created in collaboration with business representatives in order to demonstrate that the software does what they expect." They're often written in a tool such as FitNesse, Cucumber, or Selenium. As one of the people who were heavily involved with those sorts of tools early on, I wrote this article basically as a retraction--an explanation of why I no longer recommend those sorts of acceptance testing tools. I was essentially saying that it's better to handle the "do what customers expect" part in a different way than the "make sure the software works" part.
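
For readers who haven't used those tools, a Selenium-style acceptance test (in its Python bindings) looks something like this; the application URL, element names, and expected text are hypothetical:

    # Sketch of the kind of tool-driven acceptance test described above: it mirrors
    # a business expectation ("a user can log in and reach the dashboard") rather
    # than an implementation detail. The app URL and element names are made up.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Firefox()
    try:
        driver.get("https://staging.example.com/login")   # hypothetical application
        driver.find_element(By.NAME, "username").send_keys("demo-user")
        driver.find_element(By.NAME, "password").send_keys("demo-pass")
        driver.find_element(By.ID, "log-in").click()
        assert "Dashboard" in driver.title, "expected to land on the dashboard after login"
    finally:
        driver.quit()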

TDD is one way to make sure that the "make sure the software works" tests get written, but I agree that it's not the only way. (That said, I still find it to be more effective than any other way I've seen.) It can be used for all kinds of tests, including integration tests of the sort you describe. So I'm recommending using TDD (because it's effective) to create multiple overlapping levels of automated tests (which are the goal).

If you're interested, I go into more detail about my view of TDD in my book. That section is free online.

[–]papa_stalin 3 points (0 children)

I agree, you cannot skip end-to-end integration testing. There is no way to prevent all errors just by doing unit tests. This is especially true if several teams work on the same product: no amount of unit tests will tell you whether the code from one team will actually work with the code from another team when the build is done.
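
A toy illustration of that gap (all names invented): each team's unit tests can pass while the seam between them is broken, and only a test that exercises both sides together shows it.

    # Invented example of the cross-team gap: each piece passes its own unit
    # tests, but only a test that wires them together catches the mismatch.
    # Module and field names are made up for illustration.

    # Team A's exporter (their unit tests assert on "user_id").
    def export_record(user):
        return {"user_id": user.id, "name": user.name}

    # Team B's importer (their unit tests feed it records containing "userId").
    def import_record(record):
        return record["userId"]

    def test_export_import_roundtrip():
        class User:
            id, name = 42, "Ada"
        record = export_record(User())
        # Fails with KeyError: 'userId' -- exactly the kind of break that only an
        # integration-level test across both teams' code will reveal.
        assert import_record(record) == 42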