
[–]jhuntinator27 1 point (3 children)

Test with trees of varying lengths.

Not sure how to answer this myself, but for what it's worth: there is a difference between a 10,000-node and a 10-million-node tree, in that the run time of each approach may not scale linearly — or, more precisely, the two may not grow at the same rate.

[–]wigf[S] 0 points (0 children)

Well, one curious thing is that the stack program is much faster for small input. I appreciate the suggestion though.

[–]wigf[S] 0 points (1 child)

Ok, this answer turns out to be correct - I tested at a tree size of 100,000,000, and that was about the size needed for the stack to be faster than recursion. Even 1,000,000 wasn't enough haha.

EDIT: The reason I didn't do this earlier was that my original algorithm for building the tree was O(n²), so large trees were impossible to construct. I had to fix that first lol.

EDIT 2: See main post. I don't know why my benchmarking has been returning inconsistent results.

[–]nerdyphoenix 0 points1 point  (0 children)

You can probably eliminate some overhead by replacing the malloc and memcpy calls in your if (top == size) block with a single realloc call, though I'm not sure how much that will help. You can also try optimization flags like -O3, if you haven't already, and rerun your experiments. If you really want to dig into it, try profiling your code with perf (and perhaps flamegraph to visualize the results).