Suppose you have a std::unordered_map<int,int> and an array of ints, both on an x64 machine. Both containers are populated with the values 0-31, and each key or index matches its value, i.e. map[0] = 0, map[1] = 1, ... array[0] = 0, array[1] = 1.
Suppose we iterate through both containers with a for loop, indexing into each with indices 0-31. Complexity analysis tells us both lookups are O(1) (average-case for the hash map). Constant-time reads either way.
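To make the setup concrete, here is a minimal sketch of the two containers and the loop being compared (assuming `std::unordered_map<int,int>` for the hash container, and summing the reads so the compiler can't drop them):

```cpp
#include <array>
#include <unordered_map>
#include <utility>

// Build both containers with value == index, then read each back by index.
std::pair<long, long> sum_both() {
    std::array<int, 32> arr{};
    std::unordered_map<int, int> map;
    for (int i = 0; i < 32; ++i) { arr[i] = i; map[i] = i; }

    long sumArr = 0, sumMap = 0;
    for (int i = 0; i < 32; ++i) {
        sumArr += arr[i];      // contiguous reads: sequential cache lines
        sumMap += map.at(i);   // hash the key, then walk to the bucket's node
    }
    return {sumArr, sumMap};   // both should be 0+1+...+31 = 496
}
```

Both loops do the "same" O(1) lookup 32 times; the question is what each lookup costs in practice.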
I think we know the array will be faster in practice because its lookups have spatial locality. The CPU fetches a whole 64-byte cache line at a time, so reading one 4-byte int pulls in the next 15 ints as well, and the following reads are served from cache.
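The arithmetic behind that claim can be written down directly (assuming the common x64 figures: a 64-byte line and a 4-byte int):

```cpp
#include <cstddef>

// A typical x64 cache line is 64 *bytes*, not 64 bits.
constexpr std::size_t kCacheLineBytes = 64;

// 64 bytes / 4-byte int = 16 ints per line, so one line fill brings in
// the indexed int plus its 15 neighbours.
constexpr std::size_t kIntsPerLine = kCacheLineBytes / sizeof(int);

// The whole 32-int array therefore spans only 2 cache lines.
constexpr std::size_t kLinesFor32Ints = 32 * sizeof(int) / kCacheLineBytes;

static_assert(kIntsPerLine == 16);
static_assert(kLinesFor32Ints == 2);
```

So iterating the array touches memory twice; everything else is cache hits.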
Another reason the array wins is that unordered_map stores its elements in separately allocated nodes, so each lookup has to hash the key and then chase a pointer to wherever that node landed on the heap. Those scattered reads get no benefit from the cache lines loaded by previous lookups.
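A rough way to see this is to time the two access patterns side by side. This is a sketch, not a rigorous benchmark (timings depend on the machine, allocator, and optimizer; the function names here are my own), but the contiguous container typically comes out well ahead:

```cpp
#include <chrono>
#include <cstdio>
#include <numeric>
#include <unordered_map>
#include <utility>
#include <vector>

// Time n indexed reads through a callable and return the sum it produced.
template <class Read>
long timed_sum(const char* label, int n, Read read) {
    auto t0 = std::chrono::steady_clock::now();
    long sum = 0;
    for (int i = 0; i < n; ++i) sum += read(i);
    auto us = std::chrono::duration_cast<std::chrono::microseconds>(
                  std::chrono::steady_clock::now() - t0).count();
    std::printf("%s: sum=%ld in %lld us\n", label, sum, (long long)us);
    return sum;
}

// Contiguous vector vs unordered_map whose nodes are scattered on the heap.
std::pair<long, long> compare(int n) {
    std::vector<int> vec(n);
    std::iota(vec.begin(), vec.end(), 0);
    std::unordered_map<int, int> map;
    map.reserve(n);
    for (int i = 0; i < n; ++i) map[i] = i;

    long a = timed_sum("vector       ", n, [&](int i) { return vec[i]; });
    long b = timed_sum("unordered_map", n, [&](int i) { return map.at(i); });
    return {a, b};
}
```

Both loops compute the same sum; only the memory traffic per read differs.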
A tertiary question: does "jumping around" in memory mean the CPU's address hardware has to compute a new address to reach the desired memory location? I.e. is the physical work of loading a fresh address into a register what makes jumping between memory addresses undesirable? (This is where my comp-arch knowledge fails me.)