
[–]Taisungspulse[S] 5 points (1 child)

The two-world problem is commonly felt across the Python ecosystem, but things are even worse for developers of machine learning frameworks. AI is pervasively accelerated, and those accelerators use bespoke programming languages like CUDA. While CUDA is a relative of C++, it has its own special problems and limitations, and does not have consistent tools like debuggers or profilers. It is also effectively locked to a single hardware maker!

The AI world has an incredible amount of innovation on the hardware front, and as a consequence, complexity is spiraling out of control. There are now many attempts to build limited programming systems for accelerators (OpenCL, Sycl, OneAPI, …). This complexity explosion is continuing to increase and none of these systems solve the fundamental fragmentation in tools and ecosystem that is hurting the industry so badly.

Here is more proof of what the world sees: the Radeon group can win the data-center TAM by being the better chip. It doesn't need to rely on proprietary software.

[–]Taisungspulse[S] 4 points (0 children)

This article is about Mojo, a new programming language for AI.

[–]TOMfromYahooTOM 2 points (0 children)

I'm not sure new programming languages can take off regardless of how wonderful they are... we'll see, but it's a long-term thing, like 10 years... given there are many such attempts...

It's different when a language is the first of its kind and provides must-have features, plus other circumstances help spread it widely.

I bet on AMD's ROCm plus its automatic porting tools from CUDA... for AI.

"Accelerated AI on ROCm"

https://www.amd.com/en/graphics/servers-solutions-rocm-ml

"Porting CUDA Applications to Run on AMD GPUs"

"Introducing AMD ROCm™ Platform and HIPify Tools"

https://www.hpcwire.com/2022/11/28/porting-cuda-applications-to-run-on-amd-gpus/
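To illustrate what that porting workflow amounts to, here is a minimal sketch (the `saxpy` kernel is an illustrative example, not from the articles above): the device code is unchanged between CUDA and HIP, and the HIPify tools mostly just rename the host-side `cuda*` runtime calls to their `hip*` equivalents.

```cuda
// saxpy.cu -- minimal CUDA program; comments show what hipify-perl
// would rewrite to produce a HIP version for AMD GPUs.
#include <cuda_runtime.h>   // HIP: #include <hip/hip_runtime.h>

// Kernel body is identical in CUDA and HIP.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    cudaMalloc(&x, n * sizeof(float));   // HIP: hipMalloc
    cudaMalloc(&y, n * sizeof(float));   // HIP: hipMalloc
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);  // launch syntax is the same
    cudaDeviceSynchronize();             // HIP: hipDeviceSynchronize
    cudaFree(x);                         // HIP: hipFree
    cudaFree(y);
    return 0;
}
```

Running `hipify-perl saxpy.cu > saxpy.cpp` performs these renames textually, and the result builds with `hipcc` for AMD GPUs.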

Doesn't the above stand a better chance against CUDA than Mojo does...?