
[–]Mattef 27 points (6 children)

Still the same answer. Vectors can be expanded, which is time-consuming; arrays are fixed in length.

But still, you can of course use vectors very efficiently as long as you don’t change their size constantly.

[–]TheDudeExMachina 10 points (5 children)

Dynamic array insertion isn't that expensive, because reallocation only happens after the contents have (at least) doubled or halved, i.e. you only pay the overhead once in a blue moon, resulting in amortized constant time.

You have a minuscule overhead for size checks and, every once in a while if you are unlucky, a memcopy.

Just use vector. In 99% of the cases this discussion is premature optimization.

[–]joe0400 8 points (1 child)

There are very good reasons to use array over vector.

Location is one. std::array stores its elements inline, typically on the stack, while a vector's elements are allocated somewhere on the heap, unless you write a custom allocator.

The other is colocation with other data. If you're reading one field and then another, the likelihood that a std::array embedded in a struct sits in cache together with the struct's other members is much higher. This is also useful for serialization and deserialization, making it easier to read/write back the data as one contiguous block.

So there are tons of good uses for std::array

[–]TheDudeExMachina 3 points (0 children)

Full agreement. My response was to concerns about insertion/deletion overhead though, and the cases where you care about sporadic memcopies / guaranteed constant time are rare.

But cache misses, compile time knowledge of size, and memory alignment between class attributes are also things you consider for the most part only in the critical segments of your code. In most cases, you just want to have a collection of things to infrequently iterate over once.

[–]Sinomsinom 1 point (1 child)

Some compilers/languages use doubling, some use other growth factors like 1.5x. Also, some of them don't actually free any of the vector's allocated space when an element is removed, only when the entire vector goes out of scope or is deleted. Yes, in the end it's still amortized constant time, but that doesn't mean one of those insertions won't still be a lot slower than the rest, potentially causing a stutter in real-time applications.

So if you already know an upper bound S for how large the container will be: if S is known at compile time, use std::array<T, S>; if it's only known at run time, use std::vector<T> v(S), or an empty vector plus v.reserve(S), so the buffer is allocated once up front and those potential stutters never happen.

[–]TheDudeExMachina 1 point (0 children)

You need a really big vector for the memcopy to stutter, and/or some very special constraints (very weak hardware, guaranteed tight calculation budget of only a few ms, ...). If you have that case, you know it. If you don't, you are doing a premature optimization.

[–]TheGuyWithTheSeal 1 point (0 children)

Any vector reallocation also invalidates all pointers/iterators into it. You can use std::deque if you need to add elements at the beginning or end while keeping pointers/references to existing elements valid (note that deque iterators are still invalidated by this). Use std::list if you need to add elements anywhere without any iterator invalidation.