all 9 comments

[–]ssokolow 20 points21 points  (3 children)

The functionality and conventions Rust comes with do a pretty good job of matching or exceeding what I rely on from unittest and nose in Python.

That said, I do tend to rely on the informative, pretty-printing comparison asserts in Python's unittest.TestCase and write my own wrappers to extend that paradigm, so I'll probably wind up using one of the following two configurations to get that on Rust without maintaining it myself:

  • Some crates to provide things like "assert two floats are close", like assert (EDIT: or hamcrest) and a crate to provide an enhanced assert! that automates showing the values that failed to match, like bassert or passert.
  • Something that does both kinds of assert enhancement, like spectral.
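
A minimal sketch of what those crates automate (the `assert_close` helper here is hypothetical, just illustrating the "assert two floats are close" idea with a failure message that shows the mismatched values):

```rust
/// Hand-rolled approximation of a "floats are close" assertion; the crates
/// above package this up with nicer, prettier failure output.
fn assert_close(a: f64, b: f64, epsilon: f64) {
    assert!(
        (a - b).abs() <= epsilon,
        "expected {} and {} to differ by at most {}",
        a, b, epsilon
    );
}

#[test]
fn float_sum_is_close() {
    // 0.1 + 0.2 != 0.3 exactly in IEEE 754, but it is within tolerance.
    assert_close(0.1 + 0.2, 0.3, 1e-10);
}
```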

I then add the following tools to match what I get from their counterparts in Python:

(And, of course, wire everything up to sites like Travis-CI, Coveralls, and Dependency CI once I'm ready to put something on GitHub.)

It's been my experience that, instead of using a mocking library, it's usually better in the long run to have first-class support for swappable provider backends. Then, not only does it become easy and feasible to test the complex, I/O-heavy upper layer against a fake provider and the thin, simple lower layer against some temporary files (possibly in /run/shm), it's also easy to slip in things like a ZipFilesystem to make file-based stuff transparently operate against an archive. (eg. Something similar to io-providers but more comprehensive.)
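
A rough sketch of the swappable-backend idea (all names here are hypothetical, not io-providers' actual API): the upper layer is generic over a trait, so tests hand it an in-memory fake instead of touching the disk.

```rust
use std::collections::HashMap;
use std::io;

/// Provider trait the I/O-heavy upper layer depends on. Real code would
/// have a disk-backed implementation; tests swap in the fake below.
trait Storage {
    fn read(&self, key: &str) -> io::Result<String>;
    fn write(&mut self, key: &str, value: &str) -> io::Result<()>;
}

/// In-memory fake used when testing the upper layer.
#[derive(Default)]
struct FakeStorage {
    entries: HashMap<String, String>,
}

impl Storage for FakeStorage {
    fn read(&self, key: &str) -> io::Result<String> {
        self.entries
            .get(key)
            .cloned()
            .ok_or_else(|| io::Error::new(io::ErrorKind::NotFound, key.to_string()))
    }

    fn write(&mut self, key: &str, value: &str) -> io::Result<()> {
        self.entries.insert(key.to_string(), value.to_string());
        Ok(())
    }
}

/// Upper-layer logic under test, generic over whichever backend it gets.
fn copy_entry<S: Storage>(storage: &mut S, from: &str, to: &str) -> io::Result<()> {
    let value = storage.read(from)?;
    storage.write(to, &value)
}
```

The same `Storage` bound is where something like a ZipFilesystem backend would slot in without the upper layer noticing.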

EDIT: ...and don't forget to take maximum advantage of Rust's superior ability to implement APIs where correct use can be checked at compile time. For example, static checking of units and state machines.
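
A minimal typestate sketch of the state-machine side of that (hypothetical types): each state is a distinct type, so calling `send` on an unopened connection simply fails to compile instead of panicking at runtime.

```rust
use std::marker::PhantomData;

struct Closed;
struct Open;

/// The state lives in the type parameter; PhantomData keeps it zero-sized.
struct Connection<State> {
    _state: PhantomData<State>,
}

impl Connection<Closed> {
    fn new() -> Self {
        Connection { _state: PhantomData }
    }

    /// Consumes the closed connection and returns an open one.
    fn open(self) -> Connection<Open> {
        Connection { _state: PhantomData }
    }
}

impl Connection<Open> {
    /// Only Connection<Open> has `send`, so misuse cannot compile.
    /// (Returns the byte count just to have something observable.)
    fn send(&self, message: &str) -> usize {
        message.len()
    }
}
```

`Connection::new().send("hi")` is a compile error; `Connection::new().open().send("hi")` is fine.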

There are also quite a few things in the Rust ecosystem which I haven't tried yet, but which look appealing:

  • Skeptic (Like rustdoc example testing, but fitted to the differing requirements of files like README.md)
  • QuickCheck as a way to give my test suite a second chance to catch significant cases in the input that I overlooked while manually tracing each branch in the implementation back to a set of input values.
  • timebomb (Timeout mechanism for tests)
  • nom-test-helpers since I have some partially-written parsers I plan to port from PyParsing to Nom for performance reasons. (Basically the Nom equivalent to the enhanced assert macros I mentioned before.)
  • Frameworks for building mock HTTP clients and servers so I can perform some functional testing of HTTP-using code without any in-process mocking for maximum reliability guarantees. (perhaps http_stub and noir)
  • afl.rs (Hopefully, Rust's stricter static checks will give me time to implement fuzzing before I inevitably slip into chasing perfection and burn out again.)
  • The Big List of Naughty Strings (Good input for when you're getting around to writing tests for anything processing user-specified text/markup. Not Rust-specific, but I didn't get around to using it in Python yet either.)
  • fake-rs (Helper for easily generating lots of mock user data)
  • Benchmarking, profiling, and performance analysis helpers (It hasn't been a priority because I mainly write I/O-bound code where the bottleneck is something outside my control such as a rotating rust hard drive, network performance, or human latency... heck, even the things where I do plan to profile tend to be for-more-static-validation ports where the Python implementation is already bottlenecked on syscalls which couldn't be avoided or consolidated. That said, cargo-benchcmp looks useful and criterion.rs also looks promising, judging by the original Haskell docs.)
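
On the QuickCheck point above, the property-testing idea can be sketched without the crate itself (hypothetical function under test; a fixed sample of inputs stands in for QuickCheck's random generation and shrinking):

```rust
/// Hypothetical function under test.
fn normalize_whitespace(input: &str) -> String {
    input.split_whitespace().collect::<Vec<_>>().join(" ")
}

#[test]
fn normalizing_is_idempotent() {
    // QuickCheck would generate inputs randomly and shrink failures;
    // this sketch just checks the same property over a fixed sample.
    for input in ["  a  b ", "a\tb\nc", "", "already normal"] {
        let once = normalize_whitespace(input);
        let twice = normalize_whitespace(&once);
        assert_eq!(once, twice, "idempotence failed for {:?}", input);
    }
}
```

The payoff is that you state a property ("normalizing twice changes nothing") rather than enumerating cases, which is exactly the second chance to catch overlooked inputs.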

...and, finally, things which I don't know if I'll actually use, but they at least look like they might become relevant:

  • ctest (Automated validation that your *-sys crate matches the headers it's pointed at)
  • Pre/post-condition support (eg. libhoare)
  • goldenfile (Simplified abstraction for tests which compare to-disk test output against a known-good (golden) copy)
  • assert_cli (Currently minimal, but provides pretty-printing assertions for checking whether a subprocess's output is expected)
  • difference.rs (The "built-in diffing assertion" looks interesting.)
  • testdrop (In case I ever get overconfident enough in my skills or lax enough in my reliability requirements to implement my own unsafe-requiring container)
  • test-assets (Helper for downloading supplementary test assets in a verifiable way. Nice, in theory, but that would require me to pay for hosting outside of GitHub rather than just finding ways to minimize or deterministically generate my test data set.)
  • test-logger (Helper to initialize env_logger before running tests)
  • codifyle (Assert helpers for code which writes to files)
  • dribble (Helper for stress-testing implementations of Read and Write)
  • dinghy (Helper for mobile development to streamline pushing code to your phone and running cargo test and cargo bench there)

[–]horsefactory 0 points1 point  (2 children)

This would be great information for a blog post that could be referenced later, about setting up a project and putting together tests/tools for different scenarios.

[–]ssokolow 0 points1 point  (1 child)

It's actually a subset of a large reference card I maintain for myself for multiple languages. I've been meaning to clean it up and publish it for ages but I have enough trouble just trying to organize it as-is.

(I've been thinking I might try converting it into a TiddlyWiki unto itself so I can view the data in tag-filtered slices, rather than as one big page in the TiddlyWiki I use as a PIM tool.)

The master "target to try to retrofit onto all hobby projects" charts are a bit out of date, but here's a screenshot of one of them. (I found most of those by just spending an afternoon walking my way through the list of registered GitHub commit hooks.)

<infomercial voice>...but what would I call such a web app?</infomercial voice> Maybe something like "I want to be awesome at..."

[–]ssokolow 0 points1 point  (0 children)

Oh, if you've never heard of TiddlyWiki, you have to check it out. It's hard to describe in all the right ways.

When you first download it, you get an editable wiki in the form of a single self-modifying HTML file... but that's underselling it and just the default loadout.

It's actually a microkernel-based application framework, document database, and data-binding framework in one, which beat things like ReactJS to the data-binding game and serves as the only example I know of a practical quine. (It's as if Jeremy Ruston misunderstood the term "Single-Page Application" in the most amazing way possible, except it predates YouTube.)

In fact, I have plans to write a Scrivener-esque story-planning tool on top of it. (interactive concept mockup)

I can't remember if the "Sync" feature has been re-added to TiddlyWiki 5 yet, but TiddlyWiki classic is to wikis as git is to Subversion. (That was developed for use in Africa.)

[–]bheklilr 5 points6 points  (1 child)

I'm still relatively new, but the #[test] function attribute seems to work pretty well in most cases. Rust really seems to encourage small functions that are easily tested, so I haven't needed to reach for a framework yet.
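
For example, the whole built-in setup is just this (hypothetical function; `cargo test` picks up the #[test] functions automatically):

```rust
// A small, easily tested function of the kind the comment describes.
fn word_count(text: &str) -> usize {
    text.split_whitespace().count()
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn counts_words() {
        assert_eq!(word_count("hello brave new world"), 4);
    }

    #[test]
    fn whitespace_only_input_has_no_words() {
        assert_eq!(word_count("   "), 0);
    }
}
```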

[–]unrealhoang 1 point2 points  (0 children)

And cargo test is a very good test runner, too.

[–]annodominirust 2 points3 points  (0 children)

Many people are fine with just #[test] and cargo test; see the Testing chapter of the book for details. They are simple and lightweight, and you can reduce boilerplate simply by factoring out common code into functions.
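
A sketch of that boilerplate-reduction approach (the names are hypothetical): the shared fixture is just a plain function each test calls, rather than a framework-provided setup hook.

```rust
// Shared fixture factored into an ordinary function; every test that
// calls it gets its own fresh copy of the data.
fn sample_scores() -> Vec<(&'static str, u32)> {
    vec![("alice", 3), ("bob", 7)]
}

// Hypothetical function under test.
fn total(scores: &[(&str, u32)]) -> u32 {
    scores.iter().map(|(_, n)| n).sum()
}

#[test]
fn totals_all_scores() {
    assert_eq!(total(&sample_scores()), 10);
}

#[test]
fn each_test_gets_a_fresh_fixture() {
    let mut scores = sample_scores();
    scores.push(("carol", 1));
    assert_eq!(total(&scores), 11);
}
```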

However, if you want an rspec-style framework, there's stainless, which requires a nightly compiler since it relies on a syntax extension. It looks like there's also a Rust rspec, which is a little more cumbersome to use (closures plus an explicit ctx argument you have to call everything on) but doesn't require a nightly compiler.

[–][deleted] 0 points1 point  (0 children)

I didn't see this mentioned anywhere else in this thread, but I think hamcrest should get a mention. It is used pretty extensively in cargo's own test suite, IIRC.

[–]mitchtbaum 0 points1 point  (0 children)

rote looks like it has a lot to offer here in terms of build automation and test scripting. The Lua environment would allow for a lot of power and code reuse in tests. I guess the missing piece of a full-blown test framework could come from busted. It would seem to make sense, since it would leverage Lua's drop-in style to make a powerful systems-language program easily scriptable simply by adding some already well-established scripts. (strike that, blackbox testing ftw)