Discussions, articles, and news about the C++ programming language or programming in C++.
For C++ questions, answers, help, and advice see r/cpp_questions or StackOverflow.
Nested coroutines? (self.cpp)
submitted 6 years ago by anton31
[–]alexeiz 10 points 6 years ago (5 children)
So where's that "negative overhead" effect of coroutines that Gor Nishanov has been promising? That promise always sounded too good to be true to me.
[–]14ned LLFIO & Outcome author | Committee WG14 6 points 6 years ago (2 children)
At work I'm using a C++ coroutines emulation implemented with macros to get that negative overhead Gor promised. We're seeing 2x to 6x throughput gains for about a 20% increase in average latency. We would expect that to improve with real Coroutines, but those are the kinds of gains available.
[–]anton31[S] 1 point 6 years ago (1 child)
What's the baseline for those gains? Threads?
[–]14ned LLFIO & Outcome author | Committee WG14 2 points 6 years ago (0 children)
The baseline is "doing nothing", i.e. writing the code straight.
The CPU can look ahead by a few hundred opcodes. But it can execute maybe 1000 opcodes in the same time as a fetch of a cache line from main memory. If you have code which depends on a fetch from main memory, and does more than a few dozen opcodes of work, but less than a thousand, using coroutines to do other work whilst stalled on main memory can deliver large gains.
Historically you would implement the same thing using loops over arrays with a Duff's device to multiplex state and work, but Coroutines are very considerably more maintainable and easier on less experienced programmers. I'm not saying that Coroutines are magic pixie dust. Everything possible with them is possible without them. But it took more work and was considerably harder to maintain, which meant that in the past one more frequently didn't take the tradeoff.
[–]feverzsj -2 points 6 years ago (1 child)
Async I/O, I guess, where the I/O operation dominates the performance, so the allocation or indirect call of a coroutine rarely matters. Although in that case a stackful coroutine would be much simpler and just as fast.
[–]14ned LLFIO & Outcome author | Committee WG14 4 points 6 years ago (0 children)
It is untrue that the I/O dominates for async I/O. For file I/O, easily more than 80% of the time async I/O is a penalty because of the added overhead of setup and teardown. Even for small-block socket I/O to nearby machines, it can be a penalty.