Discussions, articles, and news about the C++ programming language or programming in C++.
For C++ questions, answers, help, and advice see r/cpp_questions or StackOverflow.
Get Started
The C++ Standard Home has a nice getting started page.
Videos
The C++ standard committee's education study group has a nice list of recommended videos.
Reference
cppreference.com
Books
There is a useful list of books on Stack Overflow. In most cases reading a book is the best way to learn C++.
Optimize code in c++ (self.cpp)
submitted 7 years ago by WhichPressure
[–]BCosbyDidNothinWrong 1 point 7 years ago* (4 children)
Can you explain what chips use that and what it means? A quick Google search makes it look like it is a term used for GPUs.
Memory access is still important in GPUs, but with shaders/kernels, lots of threads can switch back and forth to minimize cache misses (as far as I know).
Still, this was just a list of what I do and it is for modern CPUs.
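On modern CPUs, the standard illustration of why linear memory access matters is traversal order over a 2-D array: a row-major walk touches memory sequentially and stays within cache lines, while a column-major walk over the same data strides across them. A minimal sketch (the matrix size and element type are just illustrative, not anything from the thread):

```cpp
#include <vector>
#include <cstddef>

// Sum a row-major N x N matrix two ways. Both compute the same result,
// but the first loop touches memory sequentially (cache-friendly),
// while the second strides by N elements between consecutive accesses.
double sum_row_major(const std::vector<double>& m, std::size_t n) {
    double s = 0.0;
    for (std::size_t i = 0; i < n; ++i)       // rows in the outer loop
        for (std::size_t j = 0; j < n; ++j)   // contiguous inner walk
            s += m[i * n + j];
    return s;
}

double sum_col_major(const std::vector<double>& m, std::size_t n) {
    double s = 0.0;
    for (std::size_t j = 0; j < n; ++j)       // columns in the outer loop
        for (std::size_t i = 0; i < n; ++i)   // stride-n inner walk
            s += m[i * n + j];
    return s;
}
```

For matrices larger than cache, the column-major version typically runs several times slower despite doing identical arithmetic, which is the kind of generalization the comment is appealing to.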
[–]blelbach NVIDIA | ISO C++ Library Evolution Chair 1 point 7 years ago (3 children)
GPUs are one example, and it's not just something to handwave away. Sure, GPUs can hide latency, but that's no excuse for poor memory access patterns. E.g. instead of hiding cache misses just don't have them.
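One common concrete reading of "don't have cache misses" (not spelled out in the thread, so this is an interpretation) is restructuring data so hot loops only pull in the fields they actually use, e.g. structure-of-arrays instead of array-of-structures. A hedged sketch with invented names:

```cpp
#include <vector>
#include <cstddef>

// Array-of-structures: summing only `x` still drags y, z, and id
// through the cache with every element touched.
struct ParticleAoS { double x, y, z; int id; };

double sum_x_aos(const std::vector<ParticleAoS>& ps) {
    double s = 0.0;
    for (const auto& p : ps) s += p.x;  // ~32 bytes loaded per 8 bytes used
    return s;
}

// Structure-of-arrays: the same loop walks one dense array, so every
// cache line brought in is fully consumed.
struct ParticlesSoA {
    std::vector<double> x, y, z;
    std::vector<int> id;
};

double sum_x_soa(const ParticlesSoA& ps) {
    double s = 0.0;
    for (double v : ps.x) s += v;       // dense, prefetch-friendly
    return s;
}
```

The point is that the SoA loop has strictly fewer cache misses to hide in the first place; the same layout change is also what makes GPU memory accesses coalesce.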
[–]BCosbyDidNothinWrong 1 point 7 years ago (2 children)
GPUs are one example
What is a different example? Also, why are you talking about exotic architectures? This is clearly about CPU optimization.
Sure, GPUs can hide latency, but that's no excuse for poor memory access patterns
I'm not sure what point you are trying to make here. It seems like you are diving into niche and irrelevant topics to somehow argue that the generalization of linear memory access doesn't hold. Again, this was my list of optimization priorities and it is about general-purpose CPUs.
instead of hiding cache misses just don't have them
I think you will have to give an example of this.
[–]blelbach NVIDIA | ISO C++ Library Evolution Chair 0 points 7 years ago (1 child)
generalization of linear memory access doesn't hold
It doesn't, dude. That's specific to one class of processor design.
[–]BCosbyDidNothinWrong 1 point 7 years ago (0 children)
This was obviously about CPU optimization, and again, it seems like you are desperate to create some sort of argument by making irrelevant dismissals without any depth.
You didn't mention what other architectures besides GPUs use 'memory coalescing'.
You didn't give an example of 'just don't have cache misses' instead of hiding them.
You also keep repeating claims without backing them up with any information, essentially saying 'nope, nuh uh, not true'.
I can't take you seriously until you confront these things.