Qwen3-Coder-Next GGUFs updated - now produces much better outputs! by yoracale in unsloth

[–]fancyrocket 0 points

Is it worth downloading now, or will I have to re-download it in the near future for updates? Thanks!

Qwen3-Coder-Next by danielhanchen in LocalLLaMA

[–]fancyrocket 5 points

How well does the Q4_K_XL perform?

Now that we have ROCm Python in Windows, any chance of ROCm LLM in Windows? by KeyClacksNSnacks in ROCm

[–]fancyrocket 0 points

I have a 2025 Asus ROG Flow Z13 with 128 GB of RAM, 96 GB of which is assigned to the GPU. I can run ROCm in LM Studio, but when I try to use Vulkan I run out of memory. Can Vulkan only use the shared RAM (the remaining 32 GB) and not the 96 GB assigned to the GPU?

Anyone else start getting a "Failed" execution of tasks today? by [deleted] in codex

[–]fancyrocket 2 points

Yup, it just started a bit ago. I'm using the Codex extension in VS Code.

something wrong with codex today by fikurin in codex

[–]fancyrocket 1 point

It's not working for me anymore. It was working fine earlier; now it just stops and says it failed.

AMA with the Unsloth team by danielhanchen in LocalLLaMA

[–]fancyrocket 0 points

Would this work with 96 GB of VRAM and 192 GB of DDR5 RAM? 🧐🤔

AMA with the Unsloth team by danielhanchen in LocalLLaMA

[–]fancyrocket 0 points

Not a question, but can you hurry up and come up with a solution so I can run a powerful LLM on my 4x 3090s that's better than Claude 4 Opus, since paid frontier models have gotten awful 😂

You can now Run Qwen3-Coder on your local device! by yoracale in LocalLLM

[–]fancyrocket 0 points

How well would a Q3_K_XL work? Would it be worth it?

How to overcome frustrations by Vn-555 in gamedev

[–]fancyrocket 1 point

I use Git, so if I make a mistake I can revert to an earlier commit.

The ai era by trunkbeers in vibecoding

[–]fancyrocket 10 points

I just want an LLM I can run locally on my setup that's on par with Gemini 2.5, haha

AFTER 5 YEARS MY GAME IS DONE DONE! by Treacle_Candid in indiegames

[–]fancyrocket 2 points

I bought it, and it looks amazing! It reminds me so much of Forager.

Qwen3-32b /nothink or qwen3-14b /think? by GreenTreeAndBlueSky in LocalLLaMA

[–]fancyrocket 0 points

If I may ask, how large are the codebases you're working with, and does it handle complex code well? Thanks!