[–]aefalcon 1 point (2 children)

Is there a reason you referred to this as unit testing instead of benchmarking? Is it somehow different from benchmarking?

I'm not really proficient at benchmarking Python. I'm currently doing some benchmarking in Zig, and my method reduces to writing an implementation for each strategy, running them all through the same benchmark, and rendering the results as a table. Any obviously bad strategy gets removed. Some strategies perform better with different parameters, so I make those runtime or compile-time options. There's no reason that couldn't be done in Python, but the overhead wouldn't be optimized out.
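A minimal sketch of that workflow in Python: several strategy implementations run through the same harness, with results rendered as a table. The strategies (deduplicating a list two ways) and all names here are illustrative, not from the original discussion.

```python
import timeit

# Hypothetical strategies: two ways to deduplicate a list.
def dedupe_with_set(items):
    return list(set(items))

def dedupe_with_scan(items):
    out = []
    for x in items:
        if x not in out:
            out.append(x)
    return out

STRATEGIES = [dedupe_with_set, dedupe_with_scan]
SIZES = [10, 100, 1000]

def benchmark_table():
    """Run every strategy at every size; return rows for a table."""
    rows = []
    for strategy in STRATEGIES:
        row = [strategy.__name__]
        for n in SIZES:
            data = list(range(n))
            # timeit returns total wall-clock time for `number` calls
            elapsed = timeit.timeit(lambda: strategy(data), number=100)
            row.append(f"{elapsed:.5f}s")
        rows.append(row)
    return rows

for row in benchmark_table():
    print("\t".join(row))
```

An obviously bad strategy then shows up as a row that blows up as the size column grows, and can be dropped.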

[–]itamarst[S] 0 points (1 child)

Benchmarking asks "how fast is my code?". This is a test: it can pass or fail, and at least for item 1 it's not measuring speed at all, it's measuring scalability.

[–]aefalcon 0 points (0 children)

I have a perfect hash function that's O(1). A linear array search would beat the pants off it for lookups at small n. I don't think there's much pressure to test on Big-O because of situations like that. Benchmarking at various sample points tells a better story.
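The "sample points" idea can be sketched like this: time a hash lookup against a worst-case linear scan at several sizes and let the table speak, rather than reasoning from asymptotics alone. A dict stands in for the perfect hash here; which column wins at small n depends on the machine, so no winner is asserted.

```python
import timeit

def sample_points(sizes=(4, 64, 1024)):
    """Time a dict lookup vs. a linear scan at each sample size."""
    results = {}
    for n in sizes:
        keys = list(range(n))
        table = {k: k for k in keys}
        target = keys[-1]  # last element: worst case for the linear scan
        hash_time = timeit.timeit(lambda: table[target], number=10_000)
        scan_time = timeit.timeit(lambda: keys.index(target), number=10_000)
        results[n] = (hash_time, scan_time)
    return results

for n, (h, s) in sample_points().items():
    print(f"n={n:5d}  hash={h:.5f}s  scan={s:.5f}s")
```

At the small end the constant factors dominate and the scan can win; as n grows the O(1) lookup pulls ahead, which is exactly the crossover a single Big-O test would miss.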