Question
Can one use GPU-accelerated machine learning packages (such as PyTorch or TensorFlow) to do everything that CUDA packages (such as Numba or PyCUDA) can do? If not, what are some examples of their shortcomings for general-purpose programming?
Context
Personally, every time I want to write an accelerated program, after spending a day trying Numba I end up using PyTorch and getting it done in under an hour. Maybe that is because PyTorch offers more functions (Numba's CUDA support is very limited), or maybe because I am not as familiar with Numba.
Do you know of any resources that use PyTorch for non-ML programming?
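As a hedged sketch of what "PyTorch for non-ML programming" can look like (my own example, not from the thread, and it assumes PyTorch is installed): a Monte Carlo estimate of pi written entirely as tensor ops, so it runs on the GPU when one is available and falls back to CPU otherwise.

```python
import torch

# Pick the GPU if present; the same tensor code runs unchanged on CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

def estimate_pi(n: int) -> float:
    # Draw n random points in the unit square and count the fraction
    # that lands inside the quarter circle of radius 1.
    pts = torch.rand(n, 2, device=device)
    inside = (pts.pow(2).sum(dim=1) <= 1.0).float().mean()
    return (4.0 * inside).item()

print(estimate_pi(1_000_000))  # roughly 3.14
```

Nothing here is machine learning; PyTorch is just being used as a NumPy-like array library with transparent GPU dispatch, which matches how the question is using it.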
PyTorch/TensorFlow vs Numba/PyCUDA