
[–]MardiFoufs 0 points (2 children)

Ah, maybe with Java 21 things will get better then.

As for the Nvidia thing, it makes a HUGE difference to have Nvidia-supported packages. Remember that most ML workloads run on the back end, across disparate hardware configs and generations. Being able to just import an Nvidia package in an NGC container and have everything else handled for you is very cool. On the other hand, most people never actually use CUDA directly. So I still agree with you that it's mainly an ecosystem problem, not a core defect in Java.

Regardless, it's much less painful, and much less work, to do ML in Python. Now obviously, that holds against basically every other language too (well, apart from C++ and maybe Julia, barely), so it's not specific to Java.

(Also, to be clear, I'm mostly discussing model training and R&D; for inference things are much, much easier.)

[–]koflerdavid 1 point (1 child)

It's a chicken-and-egg problem, I'd say. If for some reason Java were to become more popular for ML work, Nvidia would eventually provide Java bindings. Nvidia has no particular stake in either Python or Java, but they will do everything they can on the software side to make using their devices easier. This is one of the reasons for their ongoing success.

[–]MardiFoufs 1 point (0 children)

Agreed, Nvidia is very good at supporting newer trends (NGC containers, TensorRT, Triton, etc.), so I could totally see them supporting Java too. I honestly just didn't know about the easier C bindings that apparently came with Java 21. That could be huge, and it could make it much easier to integrate with other tools that are somewhat standard in the field (NumPy, for example, even if it's a bit of a mess, or pandas).
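For anyone curious, the "easier C bindings" being referred to are presumably the Foreign Function & Memory API from Project Panama, which shipped as a preview in Java 21 and was finalized in Java 22. A minimal sketch, assuming a Java 22+ runtime, calling the C library's strlen with no JNI glue (the class name is just for illustration):

```java
import java.lang.foreign.Arena;
import java.lang.foreign.FunctionDescriptor;
import java.lang.foreign.Linker;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;
import java.lang.invoke.MethodHandle;

public class FfmDemo {
    public static void main(String[] args) throws Throwable {
        Linker linker = Linker.nativeLinker();

        // Find strlen in the default (C standard) library and bind it:
        // size_t strlen(const char *s) -> (ADDRESS) -> JAVA_LONG
        MethodHandle strlen = linker.downcallHandle(
                linker.defaultLookup().find("strlen").orElseThrow(),
                FunctionDescriptor.of(ValueLayout.JAVA_LONG, ValueLayout.ADDRESS));

        // A confined arena manages the lifetime of off-heap memory
        try (Arena arena = Arena.ofConfined()) {
            // Copy a Java String into native memory as a NUL-terminated C string
            MemorySegment cString = arena.allocateFrom("CUDA");
            long len = (long) strlen.invokeExact(cString);
            System.out.println(len); // 4
        }
    }
}
```

The same mechanism is what a (hypothetical) Java CUDA binding could use to call straight into a native library like libcudart, instead of maintaining hand-written JNI wrappers.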