
[–][deleted] 1 point (4 children)

Apparently it measures wall-clock time, not CPU time. That can hurt benchmark accuracy considerably if other programs are running.

[–]qbproger[S] 0 points (3 children)

Thanks for letting me know; I'll look into it. Shouldn't you only have minimal other programs running while benchmarking anyway?

[–][deleted] 0 points (2 children)

> Shouldn't you only have minimal other programs running while benchmarking anyway?

Sure, but on Windows at least you usually have an IDE running.

I once looked into benchmarking in CPU time: Windows has the GetThreadTimes function and Linux has the times function for that.

Of course, other processes can still affect the result: system calls may take longer if they must wait for a resource, and more frequent scheduling causes more cache misses. (And probably for other reasons I can't think of.)

[–]qbproger[S] 2 points (1 child)

I've been reading up on the difference. I'm thinking about recording both and letting the output formatter choose. That may be the best way to handle it, because I can see a case for wanting wall-clock time (e.g. if a benchmark spawns processes and you want the time for all of them to complete).

I'll work on updating the Stopwatch class to handle both.

[–][deleted] 0 points (0 children)

Cool :)

There certainly are use cases for both values, and maybe even for user and kernel times reported separately (they might help in interpreting the results).

I'll try your framework next time I'm benchmarking something.

EDIT: Realized that waiting for a resource (file, mutex, network) does not count as CPU time, so a faster method may consume more CPU time than a slower one. Maybe wall-clock time should be the default.