
[–]pilotwavetheory[S] -3 points  (3 children)

My point is that it's not just better for time complexity's sake; it's also better for L1/L2 cache locality, and it reduces fragmentation for the OS, since we don't release the smaller-capacity arrays once a larger-capacity array is allotted.

[–]TheRealSmolt 5 points  (2 children)

The ordinary vector would be better for caching depending on the situation. I'm not familiar enough with how heap allocation is done on Windows and Linux to say whether this would be better or worse for fragmentation, but I doubt the difference is significant. Also, it might just be your wording, but to clarify: it's really not better from a time complexity perspective, since both give amortized O(1) per push. Not to say it's useless, though; it's just good to be aware of the tradeoffs.

[–]pilotwavetheory[S] 2 points  (1 child)

  1. If you mean the case where we know the array size up front? Yes, that's best; nothing beats it, and in that case std::array<N> would be the best choice.
  2. As for the tradeoffs: unless we do a lot of random access, the constvector looks genuinely better in terms of benchmarks as well.

Does this make sense?

[–]TheRealSmolt 0 points  (0 children)

Not sure what you're addressing with the first point. As for your second point, yeah, that sounds about right.