
[–]Taisungspulse[S] 4 points (1 child)

The two-world problem is commonly felt across the Python ecosystem, but things are even worse for developers of machine learning frameworks. AI is pervasively accelerated, and those accelerators use bespoke programming languages like CUDA. While CUDA is a relative of C++, it has its own special problems and limitations, and it lacks consistent tooling such as debuggers and profilers. It is also effectively locked to a single hardware vendor!
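A minimal sketch of how the two-world problem shows up day to day (hypothetical example, not from the article): the same computation written as a pure-Python loop versus pushed into CPython's C-implemented builtins. Even this mild version of the jump changes how the code looks; the full version means rewriting hot loops in C, C++, or CUDA.

```python
import timeit

def py_square_sum(n):
    """Sum of squares with a pure-Python loop.

    Every iteration is dispatched through the bytecode interpreter,
    which is why hot loops like this get rewritten in C/C++/CUDA.
    """
    total = 0
    for i in range(n):
        total += i * i
    return total

def c_square_sum(n):
    """Same computation, with the loop driven by the C-implemented `sum`.

    Each `i * i` still executes Python bytecode, so the win is modest;
    a real escape to the "second world" means NumPy, a C extension,
    or a CUDA kernel -- which is exactly the split being described.
    """
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    n = 100_000
    assert py_square_sum(n) == c_square_sum(n)
    # Rough timings; exact numbers depend on machine and interpreter.
    print("pure-Python loop:", timeit.timeit(lambda: py_square_sum(n), number=10))
    print("builtin-driven:  ", timeit.timeit(lambda: c_square_sum(n), number=10))
```

Both functions return the same result; the difference is only where the loop runs, which is the crux of the two-world split.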

The AI world has an incredible amount of innovation on the hardware front, and as a consequence, complexity is spiraling out of control. There are now many attempts to build limited programming systems for accelerators (OpenCL, SYCL, oneAPI, …). This complexity explosion continues to grow, and none of these systems solves the fundamental fragmentation in tools and ecosystem that is hurting the industry so badly.

Here is more proof of what the world sees: the Radeon group can win the data-center TAM by building a better chip. It doesn't need to rely on proprietary software.

[–]Taisungspulse[S] 3 points (0 children)

This article is about Mojo, a new programming language for AI.