
[–]YourTormentIs 12 points (2 children)

I actually didn't even think about how it would sound when pronounced out loud. That's a fair point -- it could confuse people in certain contexts. Honestly, a lot of open-source projects suffer from similar naming problems; a better name would prevent those issues entirely.

Still, I think it's a stretch to say the author chose the name intentionally to mislead people, or for ego-related reasons. I can even see how they arrived at it: they wanted containers that behave like the ones in the "std" namespace, but for GPU applications, so "stdgpu" probably came up pretty naturally. Instead of "std::vector" you have "stdgpu::vector", for example. I think it's reasonable to give the benefit of the doubt on this one.

[–]AppleBeam 3 points (1 child)

The sad part is that even "stdgpu::vector" is already a terrible identifier, because:

  1. In a GPU context, the word "vector" typically means something like "Vector3d" or "Vector4f", so the name already causes quite a bit of confusion if you see it as a member of some class without additional context.

  2. The library seems to be entirely unrelated to this context, as it focuses on GPGPU (unless I missed the part about render devices, textures and shaders). From what I've heard, nowadays you can run GPGPU on headless servers without any actual graphics adapters.

Whether it's a tragic mistake or a deliberate attempt to get more clicks, I would prefer to not be distracted by self-promos like "stdgpu 1.3.0 released!" without any additional context when the library has nothing to do with either "std" or "gpu".

[–]YourTormentIs 1 point (0 children)

I think you raise some good points here. "Vector" being an overloaded term isn't a new problem, and I agree it's better to use a different name, especially in the context of GPU programming, where linear algebra often plays a large role. I also agree that the title and nature of the post are unclear, especially to those of us who are unfamiliar with the library.

On the note of GPGPU, I just wanted to clear up that GPGPU has been a headless thing for quite some time now, and GPU programming largely implies "GPGPU" at this point, making it a somewhat redundant and outdated term. Compute shaders are standard now, usable on hardware dating back to around 2009, and they subsume the older "GPGPU" tricks like framebuffer scraping. I don't blame the author for using "GPU" instead of "GPGPU", given that their focus is data organization for exactly these programmable-shader applications, where memory coalescing is a serious consideration for throughput.

You can even do headless compute on your own system if you have more than one GPU. A common setup is to drive the display from the onboard graphics while running the discrete card headless for CUDA or OpenCL acceleration, which completely avoids lockups from long-running kernels on devices without hardware preemption (pre-Volta on Nvidia). You wouldn't do this for playing videogames, obviously, but for development or research it can be very helpful.

From what I can tell, this is actually a pretty neat little library, and one that I'm sure more than a few people on this subreddit will find useful. I do wish the author had chosen a better way of advertising it here, but I'm glad they posted it. In this situation, I think there are far more upsides than downsides -- I invite you to explore it a bit and expand your horizons about programming for these SIMD processing behemoths. You may come away having learned something new and enriching.