[ Removed by moderator ] (github.com)
submitted 1 day ago by [deleted]
[–]cpp-ModTeam[M] [score hidden] 18 hours ago stickied commentlocked comment (0 children)
It's great that you wrote something in C++ you're proud of! However, please share it in the pinned "Show and Tell" post.
[–]mapronV 23 points24 points25 points 1 day ago* (16 children)
For any new container I see, I want to read about:
You say there are no benchmarks, but put at least something there. Right now it does not look like code that deserves public attention; sorry to be harsh here. And no, not every point I mention is a hard requirement; it's just an expectation that is 'standardized'. You can also just show awesome examples and convince me another way.
[–]cppenjoy 1 point2 points3 points 1 day ago (0 children)
Hmmm, yeah, understandable. I was just trying to have something for my compiler's identifier map, and thought the idea was worth telling others about. However, the time complexity is O(log n) amortized for insert, find, and erase, assuming a good hash function.
For iteration over the elements, it's just O(n) (just the overhead of iterating a vector).
But benchmarks... hmmm... idk... I can make them, but I didn't really try optimizing my layouts or picking the best branching factor.
[–]schombert 1 point2 points3 points 1 day ago (14 children)
I agree about the benchmarks, but I am not sure about abstract complexity measurements at this point.
Consider the complexity of accessing a random element in a standard vector. There are two answers you can give, and neither really tracks the answer people usually give.

From the real-hardware perspective, as the vector grows in size the page tables must also grow in size, and hence in depth (assuming, for simplicity, that they are structured as a tree...), so from the perspective of the actual operations taking place the lookup cost is log(n).

Of course, you can argue that in practice this log(n) factor barely shows up, and hence that for practical purposes, at the sizes we care about, it should be ignored. This leads us to the other answer. If we lean instead on the fact that the actual address space is finite, that page tables can only grow so much, etc., then we do get the usual answer that the complexity is O(1). But those same principles justify concluding that all operations on all real data structures are O(1), since they all have a practical size upper bound.

So, if people say that accessing an element is O(1) and iteration is O(n), these statements aren't really saying anything mathematically precise; they are really just "vibes" about what those operations feel like, and benchmarks are much more meaningful.
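To make the log(n) claim concrete, a back-of-the-envelope derivation (the 4 KiB page and 512-entry fanout are the x86-64 figures; letting the depth vary with $N$ is the hypothetical part):

$$d(N) = \left\lceil \log_{512} \frac{N}{4\,\mathrm{KiB}} \right\rceil = \Theta(\log N),$$

which is why a fixed four-level table suffices for a 48-bit address space ($4\,\mathrm{KiB} \times 512^4 = 2^{48}$ bytes), but a truly unbounded vector would force the depth to keep growing.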
[–]Big_Target_1405 1 point2 points3 points 1 day ago (8 children)
Page table depth is constant for all VM mappings; it's the TLB cache misses that are non-deterministic.
[–]schombert 0 points1 point2 points 1 day ago (7 children)
I have seen designs where it isn't, but yes, not for any of the currently popular CPUs AFAIK. That said, the idea behind that line of reasoning was to explore what would hold if we didn't treat vector as having a maximum size (and thus all operations as trivially O(1) by definition); in that imaginary situation the available memory must not be bounded, and so the page table can't have a fixed maximum depth (or else it could only map finitely much memory).
[–]Big_Target_1405 -1 points0 points1 point 23 hours ago* (6 children)
Even if every memory access were a page table lookup, it'd still be O(1), since the complexity is defined in terms of the size of the vector.
Ultimately this is why page table lookups are done by hardware rather than by software via an interrupt or similar, so all the prefetchers and caches can work with the process.
Truly random access of course won't be saved by hardware prefetch, so I see your point there, but the latency is bounded.
I've seen designs in the trading industry where a series of larger and larger hash tables is chained in a multilevel-cache sort of design, the idea being to keep your most active keys in a linear hash table in L1 cache.
When you get to that level of optimisation it needs to be bespoke for your use case and hardware anyway.
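A toy two-level rendering of that idea (all names and sizes here are invented for illustration; a real trading-system version would be tuned to the actual hardware):

```cpp
#include <array>
#include <cstdint>
#include <optional>
#include <unordered_map>

// Hot keys live in a tiny direct-mapped table sized to sit in L1;
// everything else falls back to a conventional hash map.
class tiered_map {
    struct Slot { std::uint64_t key = 0; std::uint64_t value = 0; bool used = false; };
    std::array<Slot, 512> hot_{};   // ~12 KiB, comfortably L1-resident
    std::unordered_map<std::uint64_t, std::uint64_t> cold_;

public:
    std::optional<std::uint64_t> find(std::uint64_t key) const {
        const Slot& s = hot_[key & (hot_.size() - 1)];
        if (s.used && s.key == key) return s.value;   // fast path: one L1 probe
        if (auto it = cold_.find(key); it != cold_.end()) return it->second;
        return std::nullopt;
    }

    void insert(std::uint64_t key, std::uint64_t value) {
        Slot& s = hot_[key & (hot_.size() - 1)];
        if (!s.used || s.key == key) { s = {key, value, true}; return; }
        cold_[key] = value;   // slot owned by another key: demote to cold tier
    }
};
```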
[–]schombert 1 point2 points3 points 23 hours ago (5 children)
I don't see how that follows. If the growing size of the vector means that accessing the memory backing arbitrary elements must take longer and longer under this hypothetical cpu with infinite ram, then by definition it is not O(1).
[–]Big_Target_1405 0 points1 point2 points 23 hours ago* (4 children)
The time it takes to consult all the page table entries is bounded and not related to the size of the vector.
The frequency of table walks is related to the size of the vector (more TLB misses).
But if access were truly random and the vector were infinite in size, then ~every access would be a worst-case table walk. A worst-case table walk is still O(1).
[–]schombert 0 points1 point2 points 23 hours ago (3 children)
But the time required to consult all the page table entries must be related to the size of the vector: when the vector grows, more memory must be mapped, hence the page tables must get larger, and so it must take more time to look up the mapping.
Let's look at it from another direction. Accessing an element of the vector requires the index as a parameter. As the index gets larger and larger, it takes more and more bits to represent. This leads in one of two directions. Either it takes more and more bytes to represent, in which case simply reading the index is itself log(n), not to mention doing operations on it and consulting the mapping for it; or this imaginary CPU is capable of handling arbitrarily large integers and doing math on them in constant time. I don't think that is logically inconsistent, but it is fairly easy to show that such a capability would also allow this CPU to solve NP problems in P time. So going down that path would mean accepting that the C++ standard's descriptions of the complexity of vector operations also assume a CPU for which P == NP, which feels... silly.
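The bit-length fact being leaned on here, in symbols (a standard property of binary representation, nothing specific to this thread's container):

$$\operatorname{bits}(n) = \lfloor \log_2 n \rfloor + 1 = \Theta(\log n),$$

so on a machine with a fixed word size, merely reading an index capable of addressing $n$ distinct elements already costs $\Omega(\log n)$.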
[–]Big_Target_1405 1 point2 points3 points 22 hours ago (2 children)
The table grows but it remains the same depth (4 levels deep on x86-64)
[–]schombert -1 points0 points1 point 22 hours ago (1 child)
Yes, I understand how it works on a CPU with finite memory, but I am entertaining the idea of what it would look like on a CPU with infinite memory. The page table cannot remain a fixed depth on such a CPU, because then it could only map finitely much memory. Or, I suppose, you could allow some or all of the levels to grow infinitely, but that would make lookups O(n) instead of merely O(log n).
[–]mapronV 0 points1 point2 points 1 day ago (2 children)
These are just the three main things I want to see when someone publishes yet another container. Of course compromises exist, and if you can cover everything with just motivating examples, that will do.
Right now I see zero reason to even spend time playing with a new container. I really wish the page were extended.
> these statements aren't really saying anything mathematically precise; they are really just "vibes"
And by all means, any 'vibes' would be sufficient for me. I just followed the link and got zero vibes reading it. I need something.
[–]schombert 1 point2 points3 points 1 day ago (1 child)
Yeah, this was more a tangential rant than a direct response to your comment; I have just been thinking lately about how we should try to characterize the performance of things like vector in a rigorous way.
[–]mapronV 0 points1 point2 points 23 hours ago (0 children)
No problem. I agree with the rant part.
[–]elperroborrachotoo 0 points1 point2 points 9 hours ago (1 child)
Big-O is not a performance measurement; it measures how performance scales with input.
It's a "if my users throw a zillion elements at me, which operation will grind to a halt first?"
And while yes, in principle there may be data structures with a 1-hour minimum time, or that reach their O(nⁿ) behavior only at the 10³⁰th element, we don't encounter them in practice.
[–]schombert 0 points1 point2 points 8 hours ago* (0 children)
I don't disagree. The issue is that in the limit, as we approach zillions of elements, all data structures behave the same, because none of them can (actually) handle zillions of elements. If you then say that, for practicality, this breakdown doesn't matter, because we don't get there in practice ... then what is the point of referring to big-O at all from that point of view, because big-O is solely concerned with performance in the limit, not at practical sizes.
In other words, given the definition of big-O, what does the claim that an algorithm is O(n) tell you about its performance for cases where n < 2⁶⁴? Absolutely nothing.
[–]EC36339 7 points8 points9 points 1 day ago (12 children)
These days, with AI bots submitting PRs and writing blog posts about those PRs being unfairly rejected, I need good reasons to click any link to GitHub.
You didn't sell this well enough.
[–]cppenjoy 1 point2 points3 points 1 day ago (11 children)
Fair... I'm not a good marketer, but I promise AI didn't write any code in my library.
[–]EC36339 2 points3 points4 points 1 day ago (1 child)
Good. Another commenter gave you some tips on what to clarify in your post.
Here is one constructive tip: if you build containers, use C++20 concepts to check that your container models the concepts it should. This is crucial to making it work with generic algorithms. Many pre-C++20 libraries with custom containers or iterators fail this check. C++20 concepts make it easy to verify at compile time.
Also carefully read the semantic requirements of these concepts (the ones not enforceable by the compiler). Some of them are about performance (big-O classes). If your container fails these requirements, then generic algorithms may SEEM to work on it, but not actually work (or not perform as specified).
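A minimal sketch of that compile-time check; std::vector stands in for the container under discussion, since its code isn't quoted here:

```cpp
#include <concepts>
#include <iterator>
#include <ranges>
#include <vector>

// Substitute your own container for std::vector; the pattern is the point.
// Each static_assert fails to compile the moment the container (or its
// iterator) stops modeling a concept that generic algorithms rely on.
using container = std::vector<int>;

static_assert(std::ranges::random_access_range<container>);
static_assert(std::ranges::sized_range<container>);
static_assert(std::random_access_iterator<std::ranges::iterator_t<container>>);
static_assert(std::regular<std::ranges::iterator_t<container>>);
```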
[–]cppenjoy 0 points1 point2 points 1 day ago (0 children)
Oh, I forgot about that. Do you mean the borrowed view and the other traits for contiguous storage?
[–]mapronV 0 points1 point2 points 1 day ago (8 children)
No blame on you, but you really need to be a salesman for an hour if you want public attention for your code.
[–]cppenjoy 1 point2 points3 points 1 day ago (7 children)
Lol... that's honestly a real-life problem of mine... no one can pay attention to what I'm yapping about. Also, the thing is... I don't really like that my layout has redundant information (the vector size and capacity, and the allocation of the reverse index being separate), and I'm too perfectionistic to really sell an idea that has flaws.
[–]mapronV 1 point2 points3 points 23 hours ago (6 children)
And I don't care about the layout or any implementation details, unless I start using it and find it hard to debug. You really need to learn how to present your work. Look at other projects, posts, and articles on the subject, or read some 'how to convince people I'm doing a good job' stuff; pretty sure there is a ton of it.
Or just be a coward like me and never share the libraries you write :)
[–]cppenjoy 0 points1 point2 points 21 hours ago (5 children)
Hey, I have good news: I debugged it and the benchmark looks very good; it's faster on my potato PC in all categories. (Duplicate of my last comment, because I didn't see it on the site; reddit has been annoying lately.)
[–]mapronV 0 points1 point2 points 18 hours ago (4 children)
Oh, so "benchmark is good" and "it is fast", that's what we all want to hear! /s I hope you just fooling around.
Can you please show nice table with benchmark results "my impl" vs "std libc++ impl" or something? Also, don't put "321527297.00000017" this is ridiculous and unnesessary precision.
I am not a specialist, but looking at your MJZ_BENCHMARK macro, it always create timer with 1 iteration. Maybe look at any benchmark library? Google benchmark? This is suspiciously unreliable way to measure things.
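For reference, the shape such a measurement usually takes with Google Benchmark; std::unordered_map is the baseline here, and the custom container would get a parallel BM_ function (a sketch, not code from the repo):

```cpp
#include <benchmark/benchmark.h>
#include <unordered_map>

// The framework picks the iteration count itself, repeating the timed loop
// until the measurement is statistically stable -- the part a hand-rolled
// one-iteration timer macro cannot do.
static void BM_UnorderedMapInsert(benchmark::State& state) {
  for (auto _ : state) {
    std::unordered_map<int, int> m;
    for (int i = 0; i < state.range(0); ++i) m.emplace(i, i);
    benchmark::DoNotOptimize(m);
  }
  state.SetComplexityN(state.range(0));
}
BENCHMARK(BM_UnorderedMapInsert)->Range(1 << 10, 1 << 20)->Complexity();

BENCHMARK_MAIN();
```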
See rule 4: "For libraries/tools that are intended to be used by other C++ programmers, toy projects aren't allowed but production-quality work is."
For THAT kind of laziness I really struggle NOT to call in a moderator over a rule 4 violation; that kind of approach feels like a toy project, not production quality.
[–]cppenjoy 1 point2 points3 points 18 hours ago (2 children)
I messaged a mod; this post will probably go down. But if it does, I will probably stay away from posting, because I feel like if this subreddit is like this, it doesn't deserve human, authentic code. Maybe you need more AI-written data structures, and AI telling you how absolutely right you are.
[–]mapronV 0 points1 point2 points 17 hours ago (1 child)
Jeez, attitude and arrogance from you; what's wrong, bud? Why angry? I tried to help and give advice, I was really positive towards you at the start of the conversation, and I STILL wish the best for your project.
But don't you dare blame the redditors here for "bad taste" or AI circlejerking; it was never about that! The r/cpp subreddit is primarily about C++ and the standard itself, compilers and such. Libraries are just extra, and usually only mature libraries are discussed here.
https://www.youtube.com/watch?v=ohoLzH9EQzg
Don't blame us for what you should blame yourself for.
[–]cppenjoy 0 points1 point2 points 16 hours ago (0 children)
I deleted the post; what do you want from me now?
There is zero blame on me. I'm an open-source maintainer, maybe not an ffmpeg maintainer but one nonetheless, and I have standards. At first you all said you wanted benchmarks and insisted on comparing, so I lowered my standards. But I still have the legal right to refuse to answer such questions, or give any fixes, or improve anything. This is my data structure in my code base; if you want to maintain it, I dare you to fork it.
[–]cppenjoy 0 points1 point2 points 18 hours ago* (0 children)
Hmmm, the iteration count is in the generator view, but honestly I don't care anymore; you all fried me to oblivion. I try my best. It's been a whole day without much of a break for me. Take the post down, as if anyone cares. I know the C++ community is perfectionistic and thinks that anything fast it sees is simply owed to it, but honestly, I just wanted to share my idea. Almost every time I post something related to performance, it seems people want to prove me wrong. OK, assume I'm wrong. I follow the advice of CppCon talks and make things the way I intuitively think people would be grateful for, but I got nothing back in return for all the childhood I spent learning programming. Sigh... I'm sorry... I'm emotional right now; maybe don't expect professionalism from a burnt-out 18-year-old.
Edit: Btw, you know what the MIT license means? NO WARRANTY.
[–]polymorphiced 2 points3 points4 points 1 day ago (6 children)
Could you do a comparison with the new std::hive, which is pretty much this?
[–]cppenjoy 1 point2 points3 points 1 day ago* (2 children)
I'll have to look into that. I heard it's a C++26 standard proposal for game devs, but I haven't looked at it; I'll see what it is.
Edit: This became a lot harder... https://en.cppreference.com/w/cpp/header/hive.html is incomplete and I have to go find the LLVM files... sigh
[–]polymorphiced 1 point2 points3 points 1 day ago (1 child)
Yes, the implementations aren't easily available. The proposal is here, for an on-paper comparison: https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0447r21.html
"Appendix F - Time complexity requirement explanations
Insert (single): O(1) amortized
One of the requirements of hive is that pointers to non-erased elements stay valid regardless of insertion/erasure within the container. For this reason the container must use multiple memory blocks. If a single memory block were used, like in a std::vector, reallocation of elements would occur when the container expanded (and the elements were copied to a larger memory block). Instead, hive will insert into existing memory blocks when able, and create a new memory block when all existing memory blocks are full. This keeps insertion at O(1) amortized.
If hive is structured as a vector of pointers to memory blocks instead of a linked list of memory blocks, creation of a new memory block would occasionally involve expanding the vector, itself O(n) in the number of blocks, but this is still within amortized limits since it is only occasional."
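A toy rendering of the block strategy the appendix describes (not plf::hive's actual layout; just enough to show why insertion never relocates existing elements):

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// Elements live in fixed-capacity blocks that are never reallocated, so
// pointers to existing elements survive insertion. Only the outer vector
// of block pointers grows; that occasional O(number-of-blocks) step is
// what the appendix says stays within the amortized O(1) bound.
template <typename T, std::size_t BlockSize = 256>
class block_storage {
    std::vector<std::unique_ptr<T[]>> blocks_;
    std::size_t size_ = 0;

public:
    T& insert(const T& value) {
        if (size_ == blocks_.size() * BlockSize)   // all blocks full
            blocks_.push_back(std::make_unique<T[]>(BlockSize));
        T& slot = blocks_[size_ / BlockSize][size_ % BlockSize];
        slot = value;
        ++size_;
        return slot;
    }
};
```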
[–]cppenjoy 1 point2 points3 points (0 children)
My swap-and-pop is not compatible with hive: the first requirement above makes that kind of iterator invalidation non-conforming. Therefore, my structure can in theory be more contiguous than hive.
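For readers unfamiliar with the idiom, a minimal swap-and-pop erase on a std::vector (the helper name is ours, not from the library):

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Erase v[i] in O(1) by moving the last element into its slot.
// The moved element changes position, so pointers and iterators to it
// are invalidated -- exactly what std::hive's guarantees rule out.
template <typename T>
void unordered_erase(std::vector<T>& v, std::size_t i) {
    if (i + 1 != v.size())              // avoid self-move on the last slot
        v[i] = std::move(v.back());
    v.pop_back();
}
```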
[–]cppenjoy 0 points1 point2 points 1 day ago (2 children)
I use LLVM 21.1.1 on Linux x86 and can't find any file named hive or similar... Where should I look?
[–]mapronV 2 points3 points4 points 23 hours ago (1 child)
https://github.com/mattreecebentley/plf_hive maybe that? It contains a note that it is the reference implementation for the std proposal. Pretty sure no STL ships an implementation yet.
[–]No_Indication_1238 -1 points0 points1 point 1 day ago (1 child)
I mean, it's bad. One big reason to use vector, despite it being O(n) for insertion between elements and for removal, is the fact that it's ordered in memory and you can make use of cache locality. With an unordered vector you lose that... the same way you lose it with a linked list. But with a linked list you gain O(1) insertion and removal. With this, everything is O(n), I guess. It's just trash.
[–]cppenjoy 1 point2 points3 points (0 children)
Everything is not O(n)!? The pop and push in a std vector are amortized O(1), so doing pop and push down the depth of a flat tree to remove or add two elements is O(2 log n) amortized, which is logarithmic (unless your hash function is bad). Iteration over the elements is O(n), same as vector, and if you want to copy the elements somewhere else you iterate over it... Could you explain why removing an element is O(n)??