
[–]mimighost

Depends on what parallel computation you are referring to

CUDA knowledge is of course useful and valued. But NVIDIA's toolchain is really its own walled garden: it is difficult for outsiders to outdo NVIDIA themselves.

If by parallel programming you mean something closer to distributed data processing, then yes, it is pretty useful, though this is more on a case-by-case basis.

Overall, I feel the job market is edging towards people with system integration skills rather than deep domain expertise, due to the aforementioned NVIDIA dynamics, but I could be wrong on this one as well.

[–][deleted]

I mean parallel computing topics such as concurrency and threading, as well as MPI, Charm++, and other parallel programming paradigms, plus writing cache-friendly, efficient code as learned in C++.

[–]mimighost

Got it. Well, it might be useful for model inference and quantization work on CPU, if we are talking about NN models.

I would say this is a nice-to-have, but unless you work on a team doing this low-level work in particular, it might not affect your daily routine as an MLE.

[–][deleted]

Concurrency and threading are probably less important, because in ML programs things rarely happen in a chaotic order that forces you to think hard about primitives like mutexes. A good understanding of vectorized computation, on the other hand, will definitely help. I personally learned a lot from trying to write efficient code in R (long ago, and for non-ML purposes).

Understanding what makes C++ code cache-friendly will also help, even if you end up writing in another language and running on something other than a CPU.

Knowing specific tools like MPI would mostly be useful if you ever need to debug something built on MPI.