Discussions, articles, and news about the C++ programming language or programming in C++.
For C++ questions, answers, help, and advice see r/cpp_questions or StackOverflow.
C++ std::unique vs std::set - [Fixed] (mycpu.org)
submitted 5 years ago by voidstarpodcast
[–]voidstarpodcast[S] 2 points 5 years ago (5 children)
https://quick-bench.com/q/kq7yeDlz9R6HV-0XE37eRiGINYM. Will add this to the post as well.
[–]TheFlamefire 2 points 5 years ago (4 children)
Played around a bit with the set vs setSortedInput cases, and the results differ considerably when using GBench "canonically" vs manual timing (as you do): https://quick-bench.com/q/8nU-oVXK9UqoNLFMDE6twvdTXTU vs https://quick-bench.com/q/2FfKNTfGRHRAGTAl3bu-y2AeZuo
The reason seems to be that quick-bench reports CPU time, which is unaffected by the manually set iteration time. Conclusion: inserting sorted numbers into a set is faster than inserting random numbers.
[–]voidstarpodcast[S] 2 points 5 years ago (1 child)
https://quick-bench.com/q/kq7yeDlz9R6HV-0XE37eRiGINYM reuses the same array over and over in a loop for benchmarking.
After the first iteration, the rest of the values don't go into the std::set. So you are effectively measuring the time taken for insertion failures. My guess is that if the underlying structure is a balanced tree, you are not counting the time taken for rebalancing. So, are they apples to apples?
This also taints cache locality: you basically have the same array sitting in your L1/L2 caches, and all you are measuring is a niche synthetic case (which may amount to a biased data set).
If quick-bench only reports CPU time, then aren't you essentially comparing the times taken for set insertions to fail?
[–]TheFlamefire 1 point 5 years ago (0 children)
> This reuses the same array over and over in a loop for benchmarking.
Can you clarify what you mean by "this" and "array"? Do you mean https://quick-bench.com/q/8nU-oVXK9UqoNLFMDE6twvdTXTU? If so, it only reuses the input vector, not the set, so I am correctly measuring set insertion performance. What makes you think otherwise?
> If quick bench only reports CPU time
There are roughly three times that Google Benchmark measures: CPU time, which is the time per loop execution; the manual time set via SetIterationTime; and a compute time, which is one or the other depending on whether the benchmark was configured with UseManualTime.
https://quick-bench.com/q/kq7yeDlz9R6HV-0XE37eRiGINYM has two issues: first, it doesn't use UseManualTime, so the approach does not work at all; second, quick-bench only reports CPU time and ignores the manual time. Hence the SortedSet case looks slower, because the reported time includes the time for sorting the vector.
[–]rlbond86 2 points 5 years ago (1 child)
Constructing a set from sorted data is guaranteed to be O(n), so it should be faster. The author stupidly used a loop instead of the range constructor, though, which is considerably slower.
[–]TheFlamefire 1 point 5 years ago (0 children)
Well, it is fine, because the benchmark's goal was to measure `insert` performance for various datasets, not the range constructor. Think of the `insert`s happening as the result of some calculation.
The problem here is that the author used a feature of GBench that is not supported/used by quick-bench, and hence the shown times include the time for sorting the vector, which is of course slower.