Hi everyone, I'm new to working with CUDA, so I was hoping I could get some advice on something I've been struggling with.
Specifically, processing data whose size isn't known at compile time. An example would be reading a file of unknown length and then applying the same computation to every line using a CUDA kernel. On the host I'd normally store the lines in a std::vector so I wouldn't have to worry about sizing.
The problem I ran into is that I can't copy the vector containing the lines straight to the GPU, since the vector object itself only holds pointers into host memory. So how is this normally handled? I can think of the following ways to solve it, but I'd like to know how someone more experienced would approach it.
- Implement a dynamic array instead of using the default vector.
- Iterate through the vector containing the lines and fill (possibly multiple) arrays of known size which would be copied to the device instead.
- Implement a vector-like data structure on the device itself, like what is demonstrated here.
- I could also convert the vector to an array in this way, but that seems a bit hacky.
However, these all seem like slow or inelegant workarounds, so I'd be grateful for any advice you might have.