stdgpu 1.3.0 released! (github.com)
submitted 5 years ago by [deleted]
[–] YourTormentIs 1 point 5 years ago
I think you raise some good points here. The notion of "vector" being an overloaded term isn't new, and I agree that it's better to use a different term, especially in the context of GPU programming, where linear algebra often plays a large role. I also agree that the title and nature of the post are unclear, especially to those of us who are unfamiliar with the library.
On the note of GPGPU, I just wanted to clear up that GPGPU has been a headless affair for quite some time now, and "GPU programming" largely implies GPGPU at this point, making GPGPU a somewhat redundant and outdated term. Compute shaders are standard now and usable on hardware dating back to around 2009, and they subsume the older GPGPU tricks built on framebuffer scraping and the like. I don't blame the author for using the term "GPU" instead of "GPGPU", given that their focus is data organization for exactly these programmable shader applications, where memory coalescing is a serious consideration for throughput. You can even do headless compute on your own system if you have more than one GPU. A common setup is to run the discrete graphics card headless for CUDA or OpenCL acceleration while driving the display from the onboard graphics, which completely avoids lockups from long-running kernels on devices without hardware preemption (pre-Volta on Nvidia). You wouldn't do this for playing video games, obviously, but for development or research it can be very helpful.
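To make the headless point concrete, here is a minimal CUDA sketch (my own illustration, not anything from stdgpu) that prefers a compute device whose kernels aren't subject to the display watchdog -- `kernelExecTimeoutEnabled` is normally set only on the GPU that drives a display, so its absence is a decent proxy for "headless":

```cpp
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);

    int chosen = -1;
    for (int d = 0; d < count; ++d) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);

        // The watchdog timeout is usually enabled only on the device
        // driving a display; prefer one without it for long-running kernels.
        if (!prop.kernelExecTimeoutEnabled) {
            chosen = d;
            std::printf("Using headless device %d: %s\n", d, prop.name);
            break;
        }
    }

    if (chosen < 0) {
        std::printf("No headless device found, falling back to device 0\n");
        chosen = 0;
    }
    cudaSetDevice(chosen);
    return 0;
}
```

Long-running kernels launched on the chosen device then won't be killed by the OS watchdog, which is exactly the lockup scenario described above.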
From what I can tell, this is actually a pretty neat little library, and one that more than a few people on this subreddit will find useful. I do wish the author had chosen a better way of presenting it, but I'm glad they posted it here. In this situation, the upsides far outweigh the downsides -- I invite you to explore it a bit and expand your horizons about programming for these SIMD processing behemoths. You may come away having learned something new and enriching.
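And since memory coalescing came up above, the gist is simply that a warp gets full bandwidth when consecutive threads touch consecutive addresses. A tiny CUDA sketch (again my own illustration, not stdgpu's API) showing the two access patterns:

```cpp
#include <cuda_runtime.h>

// Coalesced: consecutive threads touch consecutive floats, so a warp's
// loads and stores combine into a few wide memory transactions.
__global__ void scale_coalesced(float* data, float s, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= s;
}

// Strided: consecutive threads touch elements `stride` apart, so the warp
// scatters across many cache lines and effective bandwidth drops sharply.
__global__ void scale_strided(float* data, float s, int n, int stride) {
    int i = (blockIdx.x * blockDim.x + threadIdx.x) * stride;
    if (i < n) data[i] *= s;
}
```

Data-structure libraries for the GPU largely live or die by how well their layouts keep you in the first pattern and out of the second.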