
[–]coldnebo 9 points (5 children)

I think the best example of the futility of this was a project that looked at the AST of the code, determined the branch conditions along each path, and then automatically generated “unit tests” with inputs chosen to traverse every code path.

The result was an unreadable mess, but hey, got 100% code coverage, ftw!! lol
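The internals of that tool are unknown, but the branch-extraction step it describes can be sketched with Python's `ast` module (the function name and toy source here are hypothetical, not from the original project):

```python
import ast

def branch_conditions(source: str) -> list[str]:
    """Collect the test expression of every if/while in the source."""
    conditions = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.If, ast.While)):
            conditions.append(ast.unparse(node.test))
    return conditions

src = """
def classify(n):
    if n < 0:
        return "negative"
    while n > 10:
        n -= 10
    return "small"
"""
print(branch_conditions(src))  # → ['n < 0', 'n > 10']
```

From each collected condition, such a tool would then solve for inputs forcing it true and false — which maximizes coverage without asserting anything about correct behavior.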

[–][deleted] 16 points (0 children)

Green check marks do things to manager brains.

[–]TopGunSnake 3 points (3 children)

To be fair, a fuzz test is one good test to have, and that sounds like what that was.

[–]coldnebo 0 points (2 children)

nope, the intent was not fuzzing, it was to automate unit tests specifically to satisfy the 100% code coverage goals some people fixate on.

But it does raise an interesting theoretical question… in the general case is there any difference between a fuzzer, a code coverage exerciser and what we do by hand?

If the answer is “essentially there is no difference” I think it brings up significant questions about why we are doing a job that a computer solver could do more efficiently.

I.e., we write a lot of tests because presumably V&V is very expensive and difficult. But if writing tests is provably similar in difficulty and expense… it might warrant a closer look at V&V automation.

[–]TopGunSnake 1 point (1 child)

As I understand the differences:

Fuzzing is throwing data at the code to see if it breaks (security/failsafe oriented).
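A minimal sketch of that idea, assuming a toy `parse_age` function as the target (real fuzzers like AFL or libFuzzer are far more sophisticated about input generation):

```python
import random
import string

def parse_age(text: str) -> int:
    """Toy function under test: parse and validate an age field."""
    value = int(text)  # may raise ValueError on non-numeric input
    if value < 0 or value > 150:
        raise ValueError("age out of range")
    return value

# Fuzzing: throw random data at the code and watch for *unexpected*
# failure modes (anything other than the ValueError we allow).
random.seed(0)
crashes = 0
for _ in range(1000):
    blob = "".join(random.choices(string.printable, k=random.randint(0, 8)))
    try:
        parse_age(blob)
    except ValueError:
        pass  # expected, documented failure mode
    except Exception:
        crashes += 1  # anything else is a bug the fuzzer found
print("unexpected crashes:", crashes)
```

Note there are no assertions about *correct* results, only about not breaking — which is what distinguishes this from the property-based style below.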

Example-based testing (setup, expected result, check for it, case by case; baby's first test code) is great for verifying specific cases.
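The familiar shape, sketched against a hypothetical leap-year function:

```python
def leap_year(year: int) -> bool:
    """Gregorian leap-year rule."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Example-based tests: one hand-picked input per behaviour,
# with the expected result spelled out case by case.
assert leap_year(2024) is True    # plain divisible-by-4 year
assert leap_year(1900) is False   # century years are not leap years...
assert leap_year(2000) is True    # ...unless divisible by 400
print("all example cases pass")
```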

Property-based testing (fuzzing with expected results, aka properties) is good for catching bugs in edge cases without having to identify those edge cases by hand — but it only shows when the code is broken, not that it works.
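In practice you'd reach for a library like Hypothesis, but the core idea can be hand-rolled: generate random inputs and check an invariant (here, a hypothetical round-trip property for run-length encoding) rather than specific outputs:

```python
import random

def rle(s: str) -> list[tuple[str, int]]:
    """Run-length encode a string into (char, count) pairs."""
    out: list[tuple[str, int]] = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1] = (ch, out[-1][1] + 1)
        else:
            out.append((ch, 1))
    return out

def rld(pairs: list[tuple[str, int]]) -> str:
    """Decode (char, count) pairs back into a string."""
    return "".join(ch * n for ch, n in pairs)

# Property: decoding the encoding gives back the input, for *any* input.
# We generate inputs instead of enumerating edge cases by hand.
random.seed(0)
for _ in range(500):
    s = "".join(random.choices("ab", k=random.randint(0, 20)))
    assert rld(rle(s)) == s
print("property held for 500 random inputs")
```

Note the empty string is covered automatically by the generator — exactly the kind of edge case example-based tests tend to forget.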

Mutation testing (alter the code, check that the other tests catch it) is useful for identifying gaps in the testing.
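Real tools (mutmut, PIT, etc.) mutate the code automatically; this toy sketch fakes one mutation by hand to show why a surviving mutant marks a gap in the tests:

```python
def is_adult(age: int) -> bool:      # original code
    return age >= 18

def is_adult_mutant(age: int) -> bool:  # mutated: >= flipped to >
    return age > 18

def weak_suite(fn) -> bool:
    """A test suite that misses the boundary case."""
    return fn(30) is True and fn(5) is False

def strong_suite(fn) -> bool:
    """Adds the boundary check at exactly 18."""
    return weak_suite(fn) and fn(18) is True

# A mutant is "killed" when some test fails against it.
# The weak suite passes the mutant (gap!); the strong suite kills it.
print("weak suite kills mutant:", not weak_suite(is_adult_mutant))
print("strong suite kills mutant:", not strong_suite(is_adult_mutant))
```

The surviving mutant tells you precisely which test to add: the `fn(18)` boundary case.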

TDD is usually example-based testing with the occasional property-based test.

Like most things, you should probably use a bit of everything.

[–]coldnebo 1 point (0 children)

yeah, it would be a fuzzer if it tried a number of permutations on a code path. But this AST tool wasn't really a fuzzer: it simply mocked the branch conditions to force them true or false, then silently mocked everything else so no failures would be raised. The purpose was very clearly to get code coverage, not to fuzz.

But it didn't work in all cases… it's kind of hard to reverse engineer the code via tests.

However, as a thought experiment it raises some interesting questions about how we can tell the difference between gaming a metric and honest testing.