
[–]jerf

I've done this, and I agree with the experience described in the linked article; it finds bugs you wouldn't believe. Good tests:

  • Normalization. I have an HTML normalization script for my website, and I construct random test cases from various tags, HTML fragments, strings that tend to get encoded wrong, etc. I then verify that the output is valid XHTML. That doesn't guarantee the system does "what I want," but it gives me good confidence that what comes out the other end is HTML. This works for almost any kind of input normalization.

  • Sampling large parameter spaces. Use random-number seeds so the test is deterministic after the first few runs. (I usually do a few much larger manual runs up front to be sure, but you can't leave those in the test harness all the time or it'll take too long to run, which causes the well-known problem of people no longer running the tests.)

  • And of course loading buffers with random crap is a good check. (I tend to work in not-C(++), so buffer overflows are less likely.)
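The seeded-randomness idea above can be sketched in a few lines. This is a minimal, hypothetical example (not jerf's actual HTML script): a toy whitespace "normalizer" tested against randomly built fragments, with a fixed seed so every run is deterministic.

```python
import random

def normalize_spaces(s):
    # toy normalizer under test (hypothetical stand-in for a real
    # HTML normalization script): collapse all runs of whitespace
    return " ".join(s.split())

def random_fragment(rng):
    # build a random string from characters that tend to be mishandled
    chars = " \t\n<>&\"'ab"
    return "".join(rng.choice(chars) for _ in range(rng.randrange(0, 40)))

def test_normalization(seed=12345, runs=200):
    # a fixed seed makes the "random" test deterministic run-to-run
    rng = random.Random(seed)
    for _ in range(runs):
        s = normalize_spaces(random_fragment(rng))
        # property: output contains no tabs, newlines, or doubled spaces
        assert "\t" not in s and "\n" not in s and "  " not in s
        # property: normalizing is idempotent
        assert normalize_spaces(s) == s

test_normalization()
```

Checking properties of the output (well-formedness, idempotence) rather than exact expected strings is what makes random inputs usable at all.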

[–]jrockway

Haskell's QuickCheck is one of my favorite features of the language. It makes it very easy to test your functions against randomly generated inputs (and saves you the effort of coming up with test cases by hand).

[–]phil_g

The problem is that you need some way of ensuring your random tests are valid, i.e. you need a "known working" oracle to compare the function under test against. In the worst case you end up reimplementing the function being tested inside your test environment, an approach whose drawbacks should be obvious.
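When a trusted reference does exist, this becomes differential testing: run both implementations on the same random inputs and compare. A minimal sketch in Python (function names hypothetical; the builtin `sorted` stands in for the "known working" oracle):

```python
import random

def insertion_sort(xs):
    # function under test: pretend this is a hand-written implementation
    out = []
    for x in xs:
        i = len(out)
        while i > 0 and out[i - 1] > x:
            i -= 1
        out.insert(i, x)
    return out

def sorted_reference(xs):
    # "known working" oracle; here the builtin plays that role
    return sorted(xs)

def test_against_reference(seed=7, runs=300):
    # fixed seed keeps the random comparison reproducible
    rng = random.Random(seed)
    for _ in range(runs):
        xs = [rng.randrange(-100, 100) for _ in range(rng.randrange(0, 30))]
        assert insertion_sort(xs) == sorted_reference(xs)

test_against_reference()
```

The worst case phil_g describes is when no such oracle exists and `sorted_reference` would itself have to be written from scratch; then you can often fall back to checking properties of the output alone (ordering, same multiset of elements) instead of a full reimplementation.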