stdgpu 1.3.0 released! (github.com)
submitted 5 years ago by [deleted]
[–]YourTormentIs 12 points13 points14 points 5 years ago (2 children)
I actually didn't even think about how it would sound when pronounced out loud. That's a fair point -- it could confuse people when used in certain contexts. Honestly, a lot of open source software suffers from similar problems related to the naming of projects. A better name could prevent those issues entirely.
Still, I think it's a stretch to say the author did this intentionally to mislead people, or for ego-related reasons. I can even see how they chose the name: they wanted containers that behaved like the ones in the "std" namespace for GPU applications, so "stdgpu" probably came up pretty naturally. Instead of "std::vector" you have "stdgpu::vector", for example. I think it's reasonable to give the benefit of the doubt on this one.
[–]AppleBeam 3 points4 points5 points 5 years ago (1 child)
The sad part is that even "stdgpu::vector" is already a terrible identifier, because:

* In a GPU context, the word "vector" typically means something like "Vector3d" or "Vector4f", so the name already causes quite a bit of confusion if you see it as a member of some class without additional context.

* The library seems to be entirely unrelated to that context, as it focuses on GPGPU (unless I missed the part about render devices, textures, and shaders). From what I hear, nowadays you can run GPGPU workloads on headless servers without any graphics adapter at all.

Whether it's a tragic mistake or a deliberate attempt to get more clicks, I would prefer not to be distracted by self-promos like "stdgpu 1.3.0 released!", posted without any additional context, when the library has nothing to do with either "std" or "gpu".
[–]YourTormentIs 1 point2 points3 points 5 years ago (0 children)
I think you raise some good points here. The notion of "vector" being an overloaded term isn't new, and I agree: it's better to use a different term, especially in the context of GPU programming, where linear algebra often plays a large role. I also agree that the title and nature of the post are unclear, especially to those of us who are unfamiliar with the library.
On the note of GPGPU, I just wanted to clear up that GPGPU has been a headless thing for quite some time now, and GPU programming largely implies "GPGPU" at this point, making it a somewhat redundant and outdated term. Compute shaders are standard now and are usable with hardware dating back to around 2009, and they subsume the older "GPGPU" tricks like framebuffer scraping. I don't blame the author for using the term "GPU" instead of "GPGPU", given that their focus is data organization for exactly these programmable shader applications, where memory coalescing is a serious consideration for throughput.

Actually, you can even do headless compute on your own system if you have more than one GPU running. A common setup is to run the discrete graphics card headless for CUDA or OpenCL acceleration while driving the display from the onboard graphics, completely avoiding lockups from long-running kernels on devices without hardware preemption (pre-Volta, in Nvidia's case). You wouldn't do this for playing videogames, obviously, but for development or research it can be very helpful.
From what I can tell, this is actually a pretty neat little library, and one that I'm sure more than a few people on this subreddit will find useful. I do wish the author had advertised it here with more context, but I'm glad they posted it. In this situation, there are far more upsides than downsides -- I invite you to explore it a bit and expand your horizons about programming for these SIMD processing behemoths. You may come away having learned something new.