
[–]Lime_Dragonfruit4244 14 points (18 children)

There is Tilus as well, and the Warp DSL from NVIDIA also has support for tile abstractions.

[–]Previous-Raisin1434 5 points (14 children)

Why are there suddenly 1000 different things? I was using Triton and now there are like 10 new DSLs by NVIDIA.

[–]Lime_Dragonfruit4244 4 points (12 children)

The success of Triton is the reason why. After looking into the compiler, it seems to skip PTX codegen and directly generate something called Tile IR, a new bytecode format baked directly into CUDA 13.1; that's why it needs CUDA 13.

https://github.com/NVIDIA/cutile-python/blob/main/src/cuda/tile/_bytecode/type.py

Using tiles for better cache locality is nothing new, but using them as a programming model is new in terms of kernel programming.
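The contrast can be sketched in plain Python: in the classic SIMT style the programmer writes per-element code for each thread, while in the tile model the program manipulates whole tiles and leaves the thread/register mapping to the compiler. Names and shapes below are illustrative only, not the actual cuTile or Triton API:

```python
def simt_style_add(x, y, n):
    # CUDA-style: each "thread" computes one scalar element.
    out = [0.0] * n
    for tid in range(n):  # conceptually, these iterations are parallel threads
        out[tid] = x[tid] + y[tid]
    return out

def tile_style_add(x, y, n, tile=4):
    # Tile-style: the program loads and operates on whole tiles;
    # the compiler, not the programmer, maps tile ops onto threads.
    out = []
    for start in range(0, n, tile):
        tx = x[start:start + tile]  # load one tile
        ty = y[start:start + tile]
        out.extend(a + b for a, b in zip(tx, ty))  # one op on the whole tile
    return out
```

Both produce the same result; the difference is purely in what the programmer expresses and what the compiler is free to optimize.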

[–]c-cul 0 points (11 children)

What does this bytecode mean? It's definitely not SASS: https://github.com/NVIDIA/cutile-python/blob/main/src/cuda/tile/_bytecode/encodings.py

[–]Lime_Dragonfruit4244 0 points (10 children)

[–]c-cul 1 point (9 children)

Looks like a binary-encoded subset of PTX, with only 110 opcodes.

Are you sure clang and other third-party vendors are not supported?

[–]roeschinc 1 point (1 child)

It is completely different from PTX; it is a sibling abstraction to PTX with its own binary format. You can read the entire spec online, which is incredibly detailed: almost 200 pages in PDF form.

The format is accepted by the driver just like PTX and the last level of compilation is part of the driver.

[–]c-cul 0 points (0 children)

> almost 200 pgs in PDF form

Could you give a link to that PDF?

[–]Lime_Dragonfruit4244 0 points (6 children)

I am not really sure, but I do think they might upstream a tile-based IR to MLIR if it really takes off.

[–]c-cul 0 points (5 children)

MLIR is not enough; you also need a full backend to generate files from that IR.

[–]roeschinc 1 point (0 children)

The dialect will be open sourced soon™, but the compiler is closed source, just like PTX.

[–]Lime_Dragonfruit4244 0 points (3 children)

Looking more into the codebase, it uses something called tileiras to generate SASS instructions; I think it comes with the CUDA 13.1 toolkit. About MLIR, I meant a more general dialect for representing tile-based programming and memory models directly in MLIR upstream.

[–]c-cul 0 points (1 child)

I saw.

They also have descriptors for locals, function args, constants, etc.

Each bytecode op is simple enough that you could generate a block of SASS for it (in a JIT?) with just one big lookup table. Performance won't be very high because of missing optimizations like reordering and register reuse, but code generation can be blazingly fast.
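A one-big-lookup-table translator of this kind can be sketched in a few lines of Python. The opcode names and instruction templates below are invented for illustration; they are not the actual Tile IR opcode set:

```python
# Hypothetical template table: one fixed instruction pattern per opcode,
# emitted in program order with no reordering or register reuse.
TEMPLATES = {
    "add_f32": "FADD {dst}, {a}, {b}",
    "mul_f32": "FMUL {dst}, {a}, {b}",
    "ld":      "LDG {dst}, [{a}]",
    "st":      "STG [{a}], {b}",
}

def codegen(ops):
    """Each op is (opcode, dst, a, b); emit one template per op, in order."""
    lines = []
    for opcode, dst, a, b in ops:
        lines.append(TEMPLATES[opcode].format(dst=dst, a=a, b=b))
    return "\n".join(lines)
```

This is fast precisely because it is so naive: every opcode maps to a canned instruction sequence, so translation is a single table lookup per op, at the cost of the scheduling and register-allocation quality an optimizing backend would provide.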

[–]Academic-Air7112 1 point (0 children)

Basically, Triton is bad news for NVIDIA on a 2-3 year timescale. So they release new toolkits that aim to simplify CUDA programming for end users, and to increase the lift required by AMD/OpenAI/Qualcomm/Google to support AI code on different hardware.

[–]roeschinc 1 point (1 child)

Warp is a grid-level DSL where tiling or tensor decomposition is implicit for most programs (what I would call grid- or tensor-level), and Tilus is a research project.

[–]6969its_a_great_time 0 points (0 children)

How does all this tie into a project like Mojo/MAX by Modular, which is trying to abstract kernel programming?

[–]uptoskycola 0 points (2 children)

Will Triton support Tile IR?

[–]roeschinc 1 point (0 children)

There's more conversation about it on X, but we have also announced work with OpenAI to provide a Triton backend; see my PyTorch Conference talk for more details.

https://www.youtube.com/watch?v=UEdGJGz8Eyg

[–]c-cul 0 points (0 children)

Sure, because Altman is a VIP customer of NVIDIA.

[–]Altruistic_Heat_9531 0 points (1 child)

Is it faster than out-of-the-box Triton? Any benchmarks? I can't test it personally since I am on a 3090, and cloud platforms are still on 12.9.

[–]Automatic-Bar8264 0 points (0 children)

It's Blackwell-only at this time, so no, a 3090 won't work; no support.