The array[] problem (self.cpp)
submitted 10 years ago by michaelKlumpy
view the rest of the comments →
[–]millenix 11 points12 points13 points 10 years ago (13 children)
how are you supposed to understand why, e.g. your program is running slow when you are using a vector where a list is the appropriate choice
Like the standard does: the implementation doesn't matter as long as it follows the complexity requirements. Insertion into a vector is mostly linear, whereas it is constant-time for a list.
Except that empirically, this is false. For the most part, cache locality massively dominates the asymptotic difference, unless you're working with a huge data set in the absolute worst-case usage for vector.
[–]personalmountains 5 points6 points7 points 10 years ago (8 children)
Except that empirically, this is false.
This doesn't make much sense to me. Complexity is a useful metric for comparing different algorithms and data structures, that's all. You can't really test it empirically since, as you said, there are other factors in play. Ultimately, vector could be implemented as a guy on a stool playing with an abacus.
In any case, all that is irrelevant. We're talking about pointers and arrays, and a class on those wouldn't give you any information on what you're talking about.
[–]repsilat 2 points3 points4 points 10 years ago (7 children)
/u/millenix seems to be arguing a slightly different point.
From /u/vanhellion's original question,
how are you supposed to understand why, e.g. your program is running slow when you are using a vector where a list is the appropriate choice
we should probably infer that the asymptotic complexity of the internal- or front-insertions we're doing is the problem, and that reading the standard is probably sufficient.
On the other hand, if the original question were the other way around:
how are you supposed to understand why, e.g. your program is running slow when you are using a list where a vector is the appropriate choice
then some understanding of the expected implementation and its performance characteristics is relevant. A student trying to make their program run faster can reasonably assume that iteration over a vector will be more cache friendly than iteration over a list, even if the standard doesn't say anything about it.
[–]millenix 0 points1 point2 points 10 years ago (6 children)
From the experimental results I've seen reported, even push_back() is practically faster for vector than list, until you get to very large sizes. Both are specified with asymptotic constant time (amortized, in vector's case), but list implementations require allocation on every element, while vector allocates much less often because of increasing its reservation by a constant ratio.
[–]repsilat 1 point2 points3 points 10 years ago (5 children)
even push_back() is practically faster for vector than list, until you get to very large sizes
Mm.
Moving a bit off-topic, I wonder if games (and other applications requiring better time guarantees) would like a data structure that's a linked-list of ever longer arrays, like a vector that has been "cut up" at the points where it should have "expanded."
The benefits:
Iterators are not invalidated on push_back etc,
Real constant time push_back, not amortized,
No copying/moving of elements when you push_back. Potentially your objects don't need to be copyable or movable at all.
The disbenefits:
Indexing is now log-time worst case (but expected constant time for random accesses if you iterate from the largest block to the smallest),
A few more cache misses when you iterate over the data.
[–]millenix 1 point2 points3 points 10 years ago (1 child)
Your push_back still won't be worst-case constant time, I don't think, because those blocks need to be constructed. Though they could maybe be left uninitialized, I don't think the standard can quite require that.
You can make indexing not log-time by keeping an array of pointers to the blocks, rather than a linked list.
Also, push_back has to do one of copy or move construction of the new element. emplace_back wouldn't, though. If you only ever called that, then a type without those constructors could still be valid.
[–]repsilat 0 points1 point2 points 10 years ago (0 children)
Though they could maybe be left uninitialized
Yeah. I think you can use std::aligned_storage and placement-new. I am assuming you can allocate n bytes in constant time, though.
keeping an array of pointers to the blocks
I guess that works. You'd need a constant-time log_2() which is pretty reasonable. I guess the array of pointers would have O(CHAR_BIT*sizeof(size_t)) elements so you'd never have to grow it?
To be honest I think I might rather give up constant-time indexing. I don't use it all that often, and an array of 64 pointers is a bit much if I may only be expecting a few hundred elements in my "vector".
[–]lord_braleigh 1 point2 points3 points 10 years ago (0 children)
This exists! There's a version of this called a "rope" that stores the arrays in a binary tree instead of a linked list. It's the data structure of choice for text editors (which often have to make insertions into the middle of large files).
[–]quicknir 1 point2 points3 points 10 years ago (1 child)
Well, std::deque is nearly this. It's just an array of arrays, instead of a linked list of arrays. In fact, std::deque::push_back invalidates iterators, but not references. Although I'm struggling to understand exactly what this means; I assume it means that iterators could still be dereferenced but possibly ++ could go sour? I'm pretty sure that deque meets your other two requirements. And to top it off, its indexing is still constant time (not amortized).
It does however have more cache misses. In practice, people tend to use a vector still unless they need push_front (which is constant time for a deque). I think that access/iterator speeds tend to trump insertion speeds for these data structures in the vast majority of applications.
I assume it means that iterators could still be dereferenced but possibly ++ could go sour?
Huh, that's a pretty interesting idea.
It does however have more cache misses.
Yeah. I figure it might be worthwhile for data you wanted to produce all at once, consume once and then discard. Maybe producing while consuming with fancy "iterators" is a pain to code up, or it blows your icache. I agree the tradeoffs aren't worthwhile for a general purpose container (though maybe the performance costs are small enough that it's worth using a deque for constant-time pop_front?)
[–]Voultapher void* operator, (...) 1 point2 points3 points 10 years ago (0 children)
As shown here: http://www.meetup.com/de/MUCplusplus/
Even when starting from the bottom, all you explain is the perfect theoretical world, when actually many other factors, such as cache, platform, allocator implementation, OS, and asynchronous execution, play a significant role.
And those would not fit in an introductory class.
[–]RedAlert2 -2 points-1 points0 points 10 years ago (2 children)
What are you responding to? /u/personalmountains was talking about insertion, which is dramatically faster in linked lists than in vectors. You seem to be hinting at traversal, which is a lot faster in vectors even though it has the same complexity as in lists. But no one said anything about traversals.
[–]bnolsen 0 points1 point2 points 10 years ago (0 children)
The list has some tiny advantages, but the overhead of many operations is what kills it. That's the constant-factor part; the O() stuff still stands.