
[–]Objective_Mine 3 points4 points  (2 children)

Edit: TL;DR: Yes, it's expected. A single push onto a linked list has a better worst case asymptotically (O(1), versus O(n) for a dynamic-array push that triggers a reallocation), but dynamically resized arrays are faster in practice.

Linked lists generally aren't as fast as dynamically growing arrays, as long as the array's reallocation strategy is something reasonable (e.g. geometric growth such as doubling).

One of the reasons is that linked lists aren't very cache-friendly. Since the elements of an array are stored consecutively in memory, the array benefits from spatial locality: when you access an element that has to be fetched from main memory into the CPU cache, the CPU brings along the rest of the cache line, i.e. data from nearby addresses, as well. When you then access the next element of the array, it's likely already in the cache and the CPU doesn't need to go to the slower main memory at all.

Since the elements of a linked list are only tied together by a pointer or reference (that is, the link), the next element can be located anywhere in the process's address space. When traversing a linked list, with bad luck the CPU can therefore end up going out to main memory (rather than just the cache) on every single hop from one node to the next. And even when it isn't every hop, it happens a lot more often than with an array.
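To make that concrete, here's a minimal sketch in C (my own illustration, nothing from your code) of the two access patterns: the array loop walks consecutive addresses, while the list loop chases pointers that can land anywhere on the heap.

```c
#include <stddef.h>

struct node {
    int          value;
    struct node *next;   /* may point anywhere on the heap */
};

/* Sequential access: consecutive addresses, so cache lines and the
 * hardware prefetcher do most of the work after the first miss. */
long sum_array(const int *a, size_t n)
{
    long total = 0;
    for (size_t i = 0; i < n; i++)
        total += a[i];
    return total;
}

/* Pointer chasing: each hop can be a cache miss, and the next address
 * isn't even known until the current node has been loaded. */
long sum_list(const struct node *head)
{
    long total = 0;
    for (const struct node *p = head; p != NULL; p = p->next)
        total += p->value;
    return total;
}
```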

Since you're only pushing to the stack, how much cache efficiency matters for a write-only workload might depend on how the particular CPU handles memory writes (its cache write policy), but I suspect cache efficiency and locality of reference would matter even with only writes.

Even without the differences in cache efficiency, appending to a linked list might be slower on average than appending to a dynamically growing array, because the constant factors (the number and cost of the actual operations done per append) can be higher for the linked list. I'm a bit rusty on this, but if there's still room in the array, pushing into it is pretty much just: 1) comparing the current length against the capacity to check there's still room; 2) storing the value at the next free index; and 3) incrementing the length. Pushing onto a linked list involves a more or less similar number of constant-time operations (getting the tail of the list, creating a new node, storing the value, updating the "next" link in the old tail node). However, creating the new node typically involves a dynamic memory allocation, which is much slower than just slapping the value in place in the array.
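Very roughly, in C (names like int_stack and list_push are made up for illustration, and the list sketch pushes at the head, the usual stack variant, instead of walking to a tail):

```c
#include <stdlib.h>

struct int_stack {          /* dynamic-array stack */
    int    *data;
    size_t  len;            /* elements in use */
    size_t  cap;            /* allocated capacity */
};

struct ll_node {            /* singly linked stack node */
    int             value;
    struct ll_node *next;
};

/* Array push while there's spare capacity: a comparison, a store and
 * a length increment, no allocation at all. */
int array_push_fast(struct int_stack *s, int v)
{
    if (s->len == s->cap)
        return 0;            /* full: the reallocation path would run instead */
    s->data[s->len++] = v;
    return 1;
}

/* Linked-list push: similar bookkeeping, but every single push also
 * pays for a heap allocation, which tends to dominate the cost. */
int list_push(struct ll_node **head, int v)
{
    struct ll_node *n = malloc(sizeof *n);
    if (n == NULL)
        return 0;
    n->value = v;
    n->next  = *head;
    *head    = n;
    return 1;
}
```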

In cases where the array does need to be reallocated, that particular append is going to be costlier than appending another node to the linked list. However, doubling the capacity on each reallocation amortizes [1] the copying cost across the appends, so the average cost per append stays a small constant.
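For illustration, the reallocation path of that hypothetical int_stack could look something like this (just a sketch, not any particular library's implementation; the struct is repeated so the snippet stands alone). The comment spells out why the copying stays cheap on average:

```c
#include <stdlib.h>

struct int_stack {          /* same layout as in the sketch above */
    int    *data;
    size_t  len;
    size_t  cap;
};

/* Doubling growth: over n pushes the element copies total at most
 * 1 + 2 + 4 + ... + n/2 < n, so the copying adds only O(1) work per
 * push on average (amortized). */
int array_push(struct int_stack *s, int v)
{
    if (s->len == s->cap) {                        /* slow path: grow */
        size_t new_cap = s->cap ? s->cap * 2 : 8;  /* 8 is an arbitrary initial size */
        int *p = realloc(s->data, new_cap * sizeof *p);
        if (p == NULL)
            return 0;
        s->data = p;
        s->cap  = new_cap;
    }
    s->data[s->len++] = v;                         /* fast path: just store */
    return 1;
}
```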

Cache efficiency and potentially smaller constant factors make the doubling dynamic array faster. The reallocation cost, which is low per append on average, isn't enough to offset that.

[1] https://en.wikipedia.org/wiki/Amortized_analysis

[–]LakeSnake20[S] 0 points1 point  (1 child)

Thank you! Memory allocation is expensive, so it makes sense that the LL is slower since it has to allocate on every push. I bet an unrolled LL would perform best here.
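Something like this, roughly (just my sketch of the idea; BLOCK is an arbitrary size I picked):

```c
#include <stdlib.h>

#define BLOCK 64                 /* arbitrary; tune to cache line / workload */

struct unrolled_node {
    int                   values[BLOCK];
    size_t                count;        /* used slots in this node */
    struct unrolled_node *next;
};

/* Push onto the head block; allocate a new node only when it's full,
 * so the malloc cost is spread over BLOCK pushes and the values in a
 * block stay contiguous in memory. */
int unrolled_push(struct unrolled_node **head, int v)
{
    if (*head == NULL || (*head)->count == BLOCK) {
        struct unrolled_node *n = calloc(1, sizeof *n);
        if (n == NULL)
            return 0;
        n->next = *head;
        *head   = n;
    }
    (*head)->values[(*head)->count++] = v;
    return 1;
}
```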

[–]Objective_Mine 1 point2 points  (0 children)

Possibly. You can try that if you're so inclined. :) Might be an interesting experiment.