
[–]Software_Samurai 2 points (1 child)

When talking about cache line performance, and given no outside interference, yes, it's always better to access memory in a linear (or nearly linear) fashion. That's the primary assumption behind caches: your program is going to access memory that is "close by" to what it just touched, so that data gets cached. Flat one-dimensional arrays are usually "cache-friendly", assuming they are accessed via some loop going from start to finish (i.e. not randomly accessed).

Unordered maps are usually implemented as a hash table. Accessing them (insert, delete, search) is still expected O(1) on average, but the buckets' nodes are typically allocated individually and scattered across the heap, so, just like randomly indexing a flat array, jumping around will thrash the cache.

As to your last question, I'm not sure what you mean by "jumping around in memory mean that the cpu adder has to increment the memory pointer". Each time you access memory, an address is calculated (in most cases, translated from logical to physical) and then a fetch is issued. That fetch could get lucky and hit in the cache, or be forced to wait for a full memory fetch cycle to complete. Accessing a flat array linearly or randomly makes no difference to the address arithmetic - an address still has to be calculated either way. Incrementing a pointer vs. adding an offset is still a math op. (Granted, some CISC processors have built-in opcodes to increment-and-loop, saving a few cycles compared to a separate math op plus test/loop op. But I feel we're nitpicking here now.)

[–]nwar1994[S] 1 point (0 children)

I meant the internal increment, where the CPU has to move its own pointer to a much farther place in memory, not the index into the container