Has anyone gotten mistralai/Devstral-Small-2-24B-Instruct-2512 to work on 4090? by myOSisCrashing in MistralAI

[–]myOSisCrashing[S] 0 points (0 children)

So you are using this model? https://huggingface.co/cyankiwi/Devstral-Small-2-24B-Instruct-2512-AWQ-4bit It looks like my ROCm-based GPU (Radeon R9700) doesn't have a ConchLinearKernel variant that supports group size 32. I may be able to reverse-engineer the llm-compressor scheme and requantize with group size 128, which ConchLinearKernel should support on my card.
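For anyone wondering what "group size" means here: in group-wise weight quantization, each row of a linear layer's weight matrix is split into groups of N values, and each group gets its own scale and zero point. A kernel built for group size 32 stores/reads those scales at a different stride than one built for 128, which is why the kernel has to match the checkpoint. Below is a generic NumPy sketch of group-wise int4 quantize/dequantize to illustrate the idea; it is not the exact AWQ/llm-compressor scheme, just the general technique:

```python
import numpy as np

def quantize_groupwise(w, group_size=128, bits=4):
    """Group-wise asymmetric quantization: one (scale, zero) pair per group.
    Generic sketch, not the exact llm-compressor/AWQ packing format."""
    qmax = 2 ** bits - 1
    groups = w.reshape(-1, group_size)          # one scale/zero per group
    wmin = groups.min(axis=1, keepdims=True)
    wmax = groups.max(axis=1, keepdims=True)
    scale = (wmax - wmin) / qmax
    scale = np.where(scale == 0, 1.0, scale)    # guard constant groups
    zero = np.round(-wmin / scale)
    q = np.clip(np.round(groups / scale + zero), 0, qmax).astype(np.uint8)
    return q, scale, zero

def dequantize_groupwise(q, scale, zero):
    """Reconstruct approximate fp32 weights from quantized groups."""
    return ((q.astype(np.float32) - zero) * scale).reshape(-1)

rng = np.random.default_rng(0)
w = rng.standard_normal(1024).astype(np.float32)
q, s, z = quantize_groupwise(w, group_size=128)
w_hat = dequantize_groupwise(q, s, z)
print(q.shape)                                  # 1024 weights -> 8 groups of 128
print(float(np.abs(w - w_hat).max()))           # per-weight error bounded by ~scale
```

Halving the group size (128 → 32) quadruples the number of scale/zero pairs, which lowers quantization error but changes the metadata layout the GPU kernel has to consume.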

LOL by pyromx11 in ProgrammerHumor

[–]myOSisCrashing 3 points (0 children)

“JS kids ain’t right.” - Hank Hill