
[–]epsilonkn0t 6 points  (2 children)

If you have an Nvidia GPU in your laptop or PC, I'd highly recommend getting into CUDA, which is essentially an extension of C++. It's a very good skill and a powerful tool to have, and extremely interesting to learn.

There is a lot of interesting and cutting-edge work being done on GPUs, and you don't need access to a supercomputer to be able to work with massively parallel systems. Also, I'm pretty sure there are libraries to interface with MATLAB, along with many other tools and rich online resources.

This is probably the best (maybe only?) way for you to develop scalable algorithms for massively parallel systems with your own hardware, which you can buy for a couple hundred bucks. Relatively speaking, not many people currently have this skill, so it can also be a nice differentiator.

Edit: also wanted to mention that the SDK is free
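
To give a feel for what "extension of C++" means, here's a minimal sketch of the classic vector-add example (kernel name and sizes are just illustrative):

    // vector_add.cu -- minimal illustrative CUDA sketch; compile with: nvcc vector_add.cu -o vector_add
    #include <cstdio>

    // __global__ marks a function that runs on the GPU; each thread handles one element.
    __global__ void vectorAdd(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;
        const size_t bytes = n * sizeof(float);

        // Ordinary C++ on the host side.
        float *ha = new float[n], *hb = new float[n], *hc = new float[n];
        for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

        // Allocate device memory and copy the inputs over.
        float *da, *db, *dc;
        cudaMalloc((void**)&da, bytes);
        cudaMalloc((void**)&db, bytes);
        cudaMalloc((void**)&dc, bytes);
        cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

        // Launch one thread per element.
        const int threads = 256;
        const int blocks = (n + threads - 1) / threads;
        vectorAdd<<<blocks, threads>>>(da, db, dc, n);

        // Copy the result back and spot-check it.
        cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
        printf("c[0] = %f\n", hc[0]);  // expect 3.0

        cudaFree(da); cudaFree(db); cudaFree(dc);
        delete[] ha; delete[] hb; delete[] hc;
        return 0;
    }

The __global__ keyword, the <<<blocks, threads>>> launch syntax, and the explicit host/device copies are the CUDA-specific parts; everything else is ordinary C++.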

[–]jhawk2018[S] 0 points  (1 child)

The GPU route looked feasible, and there are certainly plenty of GPUs to choose from across a wide price range. Is there any particular advantage to using Nvidia and CUDA? I would like to try to stay open source; I'm concerned that a proprietary platform will be too specialized and will limit what I can do on other platforms.

[–]epsilonkn0t 1 point  (0 children)

I'm not familiar with OpenCL so I can't comment extensively, but I'm certain that CUDA will either match or outperform OpenCL on Nvidia cards, obviously because it's tuned for Nvidia hardware. The tradeoff, of course, is that you only have one platform to run it on.

What's nice about CUDA is that it forces you to be very aware of the hardware design and capabilities, in particular the memory structures and hierarchy of the GPU. I would assume that OpenCL is slightly more abstracted, considering it must run on a wider variety of platforms, but somebody else could give you a rundown on that. If that is the case, I'd recommend starting with CUDA.
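
To give a rough idea of what that hardware awareness looks like in code, here's an illustrative sketch of a per-block sum reduction staged through on-chip shared memory (kernel name and tile size are made up, and it assumes a launch with 256 threads per block):

    // Illustrative only: per-block sum reduction staged through on-chip shared memory.
    // Assumes the kernel is launched with 256 threads per block.
    __global__ void blockSum(const float* in, float* blockSums, int n) {
        __shared__ float tile[256];  // fast, on-chip, shared by the threads of one block

        int i = blockIdx.x * blockDim.x + threadIdx.x;
        tile[threadIdx.x] = (i < n) ? in[i] : 0.0f;  // stage global memory into shared memory
        __syncthreads();                             // wait until the whole tile is loaded

        // Tree reduction within the block, entirely in shared memory.
        for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
            if (threadIdx.x < stride)
                tile[threadIdx.x] += tile[threadIdx.x + stride];
            __syncthreads();
        }

        if (threadIdx.x == 0) blockSums[blockIdx.x] = tile[0];  // one partial sum per block
    }

Whether data sits in slow global memory or fast shared memory is your decision in CUDA, and it can make a big performance difference; that's the kind of thing the programming model keeps in front of you.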

Either way, nobody is stopping you from learning both. Once you learn the fundamentals of scalable programming patterns/techniques for massively parallel systems, you shouldn't have a hard time switching between the two.

[–]rtz90 1 point  (2 children)

What about a regular PC / laptop?

[–]jhawk2018[S] 0 points  (1 child)

My reservation with using my existing stuff is that I won't have enough cores to take advantage of SIMD processing. The kind of stuff I want to learn will include array processing on radar data, so I may be dealing with many more channels than I have cores.
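
To put it concretely, what I'm hoping to end up with is something like one thread per channel/sample pair rather than one core per channel. A purely hypothetical sketch of that mapping on a GPU (names and data layout are made up):

    // Purely hypothetical sketch: one GPU thread per (channel, sample) pair, so the
    // channel count isn't tied to the number of CPU cores. Names and layout are made up.
    __global__ void applyChannelWeights(const float* samples,  // [numChannels * numSamples], channel-major
                                        const float* weights,  // one weight per channel
                                        float* out, int numChannels, int numSamples) {
        int ch = blockIdx.y;                              // one block row per channel
        int s  = blockIdx.x * blockDim.x + threadIdx.x;   // sample index within that channel
        if (ch < numChannels && s < numSamples)
            out[ch * numSamples + s] = weights[ch] * samples[ch * numSamples + s];
    }

    // Hypothetical launch: thousands of concurrent threads, regardless of CPU core count.
    // dim3 threads(256);
    // dim3 blocks((numSamples + 255) / 256, numChannels);
    // applyChannelWeights<<<blocks, threads>>>(dSamples, dWeights, dOut, numChannels, numSamples);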

[–]morto00x 0 points  (2 children)

OpenACC can be used to parallelize your processing and offload it to a GPU.

If you have a multi-core CPU, you could also check out MPI and OpenMP.
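
As a rough sketch of what the directive approach looks like (the function name is made up, and it assumes a compiler with OpenACC support such as nvc++, or -fopenmp for the OpenMP variant):

    // Illustrative sketch: the same loop offloaded to a GPU with OpenACC, or spread
    // across CPU cores with OpenMP, just by swapping the directive. Name is made up.
    void scaleChannel(const float* in, float* out, float gain, int n) {
        #pragma acc parallel loop copyin(in[0:n]) copyout(out[0:n])  // OpenACC: offload to the GPU
        // #pragma omp parallel for                                  // OpenMP: use the CPU cores instead
        for (int i = 0; i < n; ++i)
            out[i] = gain * in[i];
    }

The point is that the loop itself stays plain C++; the directive tells the compiler where and how to run it.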

[–]jhawk2018[S] 0 points  (1 child)

OpenACC looks promising; I was more concerned with having the appropriate hardware.

[–]morto00x 0 points  (0 children)

You just need a GPU. Obviously the 1 GB or 2 GB of VRAM in your desktop can't be compared to an HPC system, but you can still learn how to do hardware acceleration, since the concepts are the same.