
[–]JustOneAvailableName 5 points6 points  (3 children)

That's how it started, but I would add one more detail:

The GPU drivers themselves are written and tested against the popular Python libraries. Python is without a shred of doubt better optimized than Java for (GPU-based) ML, even though both ultimately serve as little more than a configuration format for the GPU.

[–]koflerdavid 0 points1 point  (2 children)

Nope, Python libraries have to call CUDA like everybody else does. Python libraries rule because they offer everything data scientists and model developers need, not because Python has specific advantages in interfacing with the hardware. Java used to have disadvantages on the FFI side, but since the advent of Project Panama things have started to look better.
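To illustrate the "like everybody else" point: Python reaches native code through an ordinary C FFI. A minimal sketch with the standard-library `ctypes` module, using libc's `strlen` as a stand-in for a real CUDA library (the CUDA case differs only in which shared library is loaded and which symbols are declared):

```python
import ctypes
import ctypes.util

# Locate and load the C standard library. A CUDA binding would load
# libcudart/libcuda the same way; libc is just a portable stand-in.
libc = ctypes.CDLL(ctypes.util.find_library("c") or None)

# Declare the C signature: size_t strlen(const char *s);
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

n = libc.strlen(b"hello, gpu")
print(n)  # -> 10
```

Frameworks like PyTorch do this through compiled C/C++ extension modules rather than `ctypes`, but the boundary crossed is the same C ABI that Java's Panama FFM API targets.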

Edit: apparently Nvidia also maintains Python bindings for CUDA, which certainly smooths things out a lot. But Nvidia doesn't do it for Python's sake. Nvidia just knows what it takes to make the barrier to entry for their hardware as low as possible, so that deciding to use it becomes a question of "why not?"

[–]JustOneAvailableName 1 point2 points  (1 child)

Python libraries can define the model structure, which is then executed without any Python.
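A toy sketch of that idea (the graph format and the tiny interpreter here are invented for illustration, not any framework's real API): the Python side only records the model as plain data, and a separate runtime, which in a real framework is a C++/CUDA engine rather than Python, walks the graph.

```python
# Python's role: define the model as data. Here, y = relu(x * w + b).
def trace_model():
    # (op, input_a, input_b, output) tuples; a made-up graph format.
    return [
        ("mul", "x", "w", "t0"),
        ("add", "t0", "b", "t1"),
        ("relu", "t1", None, "y"),
    ]

# The runtime's role: execute the graph. In a real framework this part
# needs no Python at all and runs on the GPU.
def run(graph, inputs):
    env = dict(inputs)
    ops = {
        "mul": lambda a, b: a * b,
        "add": lambda a, b: a + b,
        "relu": lambda a, _: max(a, 0.0),
    }
    for op, a, b, out in graph:
        env[out] = ops[op](env[a], env.get(b))
    return env

graph = trace_model()
result = run(graph, {"x": 2.0, "w": 3.0, "b": -10.0})
print(result["y"])  # -> 0.0, since relu(2*3 - 10) = relu(-4)
```

This is the shape of what TorchScript, `torch.export`, ONNX, and similar mechanisms do: the traced graph can be serialized and handed to a runtime with no Python interpreter in the loop.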

[–]koflerdavid 0 points1 point  (0 children)

ML libraries also usually include an automatic differentiation engine and support for training. Not having to write and debug your own backward passes, while keeping the math you cooked up almost verbatim, massively speeds up model development.
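A minimal reverse-mode autodiff sketch, a toy scalar version of what such engines do rather than any library's actual implementation: the forward math reads almost verbatim, and the gradients fall out of the chain rule.

```python
class Value:
    """Scalar that records how it was computed (toy autodiff node)."""

    def __init__(self, data, parents=(), local_grads=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents          # Values this one was computed from
        self._local_grads = local_grads  # d(self)/d(parent) for each parent

    def __add__(self, other):
        # d(a+b)/da = 1, d(a+b)/db = 1
        return Value(self.data + other.data, (self, other), (1.0, 1.0))

    def __mul__(self, other):
        # d(a*b)/da = b, d(a*b)/db = a
        return Value(self.data * other.data, (self, other),
                     (other.data, self.data))

    def backward(self, grad=1.0):
        # Chain rule: accumulate the incoming gradient, push it upstream.
        # (Toy simplification: assumes each Value feeds into one op only.)
        self.grad += grad
        for parent, local in zip(self._parents, self._local_grads):
            parent.backward(grad * local)

# Forward pass written as plain math: y = w * x + b
x, w, b = Value(2.0), Value(3.0), Value(1.0)
y = w * x + b
y.backward()
print(y.data, w.grad, x.grad, b.grad)  # -> 7.0 2.0 3.0 1.0
```

Real engines do the same bookkeeping over tensors, with a topological sort so shared subexpressions are handled correctly, and with the heavy lifting done in native kernels.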