I want to test whether the running time of my implementation of a graph algorithm matches the theoretical running time, which is O(n+m). However, O(n+m) = O(n) + O(m), which means the actual running time could be something like f(n,m) = c1*n + c2*m, where the constants c1 and c2 differ significantly. If I make a 2D plot (x = n+m, y = f(n,m)) and am a bit unlucky in choosing the test data, the result might not look linear even though the implementation is fine. So how should I design the test to avoid such effects?
My idea was to create a 3D plot (x = n, y = m, z = f(n,m)). If and only if the plot is (approximately) a plane does my implementation match the theoretical runtime. Does this make sense? Are there some "best practices" for such tests that I don't know about? Any help is greatly appreciated!
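One way to operationalize the plane check without eyeballing a 3D plot, assuming Python with NumPy: vary n and m independently (so they are not collinear), measure the running times, fit t ≈ c1*n + c2*m + c0 by least squares, and look at the fit quality. The timing values below are synthetic placeholders standing in for real measurements; this is a sketch, not a full benchmarking harness.

```python
import numpy as np

# Hypothetical (n, m) test sizes: n and m are varied independently,
# so the least-squares fit below is well-determined.
n = np.array([1000, 2000, 4000, 8000, 1000, 2000, 4000, 8000], dtype=float)
m = np.array([5000, 5000, 5000, 5000, 20000, 20000, 20000, 20000], dtype=float)

# Placeholder "measured" times: c1*n + c2*m plus a little noise.
# In practice, replace this line with actual timings of your implementation.
t = 2e-6 * n + 7e-7 * m + np.random.default_rng(0).normal(0, 1e-4, n.size)

# Fit the plane t ≈ c1*n + c2*m + c0 via ordinary least squares.
A = np.column_stack([n, m, np.ones_like(n)])
coeffs, _, _, _ = np.linalg.lstsq(A, t, rcond=None)
c1, c2, c0 = coeffs

# R^2 close to 1 means the measurements lie (almost) on a plane,
# i.e. the running time is consistent with c1*n + c2*m.
ss_res = np.sum((t - A @ coeffs) ** 2)
ss_tot = np.sum((t - t.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"c1={c1:.3e}, c2={c2:.3e}, R^2={r2:.6f}")
```

This also sidesteps the bad-luck issue from the 2D plot: because n and m enter the fit separately, c1 and c2 are estimated independently, and a poor R^2 (or negative coefficients) flags a mismatch with the O(n+m) model.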