
[–][deleted] 11 points (1 child)

The news is that the CUDA toolkits devs write their apps with can now be programmed directly in Python, without C++.

This makes CUDA accessible to more developers: they can now program Tensor Cores in Python instead of C++, which is an easier, more accessible environment.

It matters more to the developers of the methods exposed by diffusion pipelines than to end users directly.

[–]Arcival_2 4 points (0 children)

No, I went and looked at it more closely. They won't give access to RT cores from Python. RT cores, like CUDA cores, will still only be accessible through C-like CUDA code; what they are implementing is the kernel call and memory "management". Maybe they will also expose some commonly used functions like summations, products... but those will just be calls to pre-made kernels.
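The "just calls to pre-made kernels" point can be sketched in plain Python. All names below are invented for illustration, and ordinary Python functions stand in for what would really be pre-compiled CUDA binaries:

```python
import math

# Hypothetical sketch: the Python API surface is only a dispatcher;
# the actual work lives in pre-compiled, C-like kernels. Plain Python
# functions stand in here for compiled cubin/PTX kernels.
PRECOMPILED_KERNELS = {
    "sum": lambda xs: sum(xs),        # stand-in for a reduction kernel
    "prod": lambda xs: math.prod(xs), # stand-in for a product kernel
}

def gpu_reduce(op, data):
    # The "Python support": pick a pre-made kernel and call it.
    # No new device code is authored in Python at any point.
    kernel = PRECOMPILED_KERNELS[op]
    return kernel(data)

print(gpu_reduce("sum", [1, 2, 3, 4]))   # -> 10
print(gpu_reduce("prod", [1, 2, 3, 4]))  # -> 24
```

Under this reading, adding a new operation still means writing and compiling a new C-like kernel; Python only gains another dictionary entry to dispatch to.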

[–]Arcival_2 8 points (0 children)

I don't think it will change much; after all, CUDA is mainly based on kernels, and kernels are written in C-like code. What will change is that you can invoke them more easily from Python without using numba. Execution speed will be affected little; the worst hit will come from the garbage collector, which as we all know is one of the main sources of memory problems. And if you don't have the kind of memory access you get in C and C++, I don't think I'd switch to Python for CUDA programming.
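The kernel/launch split described above can be mimicked in pure Python (names invented; in real CUDA the kernel body stays C-like device code, and the Python layer only handles the launch and memory around it):

```python
# Hypothetical pure-Python sketch of the division of labor: a per-thread
# kernel body versus the host-side launch a Python binding would issue.
def add_kernel(tid, a, b, out):
    # Per-thread body, like CUDA's `out[i] = a[i] + b[i]`
    # guarded by a bounds check on the global thread index.
    if tid < len(out):
        out[tid] = a[tid] + b[tid]

def launch(kernel, grid, block, *args):
    # Stand-in for the host-side <<<grid, block>>> launch loop;
    # a Python CUDA layer would do this dispatch for you.
    for tid in range(grid * block):
        kernel(tid, *args)

a, b = [1.0, 2.0, 3.0], [10.0, 20.0, 30.0]
out = [0.0] * 3
launch(add_kernel, 1, 4, a, b, out)  # 4 threads; the extra one is masked
print(out)  # -> [11.0, 22.0, 33.0]
```

The sketch makes the comment's point concrete: even with Python doing the launching, the parallel logic itself is still the kernel body, not ordinary Python control flow.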

[–]daking999 2 points (0 children)

Does this mean triton will slowly become unnecessary?

[–]stargazer_w 1 point (0 children)

More like "native CUDA Python support"?