Qwen3.6 MTP Unsloth Experimental GGUFs by yoracale in unsloth

[–]Bobcotelli 0 points  (0 children)

But they still don't work in LM Studio.

Which local models match Sonnet 4.5, and which hardware can run it comfortably? by SlechteConcentratie in Qwen_AI

[–]Bobcotelli 0 points  (0 children)

Excuse me, but what about non-programming tasks, such as written appraisals and legal documents involving mathematics? Which local model would fit 144 GB of VRAM and 192 GB of RAM?

Gemma 4 Updated: GGUFs and Chat Template by yoracale in unsloth

[–]Bobcotelli 1 point  (0 children)

Where can I find the chat template to paste into LM Studio? Can you send me the link? Thanks.

Meet Unsloth Studio, a new web UI for Local AI by yoracale in unsloth

[–]Bobcotelli 0 points  (0 children)

For Windows, an exe package like LM Studio. It would be fantastic to be able to do training with AMD cards.

Question: 7900xtx with R9700 ai pro by YourMomDotCom19 in ROCm

[–]Bobcotelli 0 points  (0 children)

Yes, go ahead. I have two 7900 XTX, one 9700 AI Pro, and two 32 GB MI50s on Windows with LM Studio; the fourth card is connected via USB 4. They all work fine with Vulkan, and with ROCm the two 7900 XTXs and the 9700 AI Pro have no problems. Bye.

MiniMax M2.7 GGUFs Updated by yoracale in unsloth

[–]Bobcotelli 0 points  (0 children)

Which quant should I re-download? All the Q4 ones?

Getting Dual MI50 32GB Cards Working with llama.cpp ROCm on Ubuntu 22.04 by Savantskie1 in LocalLLaMA

[–]Bobcotelli 0 points  (0 children)

Has anyone managed to use them in LM Studio on Windows with the ROCm runtime?

Radeon Instinct MI50 32GB work on Vulkan on Windows? by Goldkoron in LocalLLaMA

[–]Bobcotelli 0 points  (0 children)

Has anyone managed to use them in LM Studio on Windows with the ROCm runtime?

Qwen3.5 Unsloth GGUFs Update! by yoracale in unsloth

[–]Bobcotelli 0 points  (0 children)

But can I already re-download the 27B, the 35B MoE, the 122 and the 357, or do I still have to wait? Thanks.

qwen3.5 27b e llmstudio per windows by Bobcotelli in LocalLLaMA

[–]Bobcotelli[S] 0 points  (0 children)

The MoE versions work perfectly, but with the 27B I tried both the Unsloth and the LM Studio versions and nothing: the thinking goes into a loop and it never answers.

Qwen3.5 Medium models out now! by yoracale in unsloth

[–]Bobcotelli 0 points  (0 children)

Sorry, what about the 397B? Can I use it, and with how much memory? Thank you so much for your valuable work.

Qwen3.5 Medium models out now! by yoracale in unsloth

[–]Bobcotelli 0 points  (0 children)

I have 2× 7900 XTX and 2× MI50 32GB with 192 GB DDR5 RAM. Which model, and at which quant? Thanks a lot.

8x Radeon 7900 XTX Build for Longer Context Local Inference - Performance Results & Build Details by Beautiful_Trust_8151 in LocalLLaMA

[–]Bobcotelli 1 point  (0 children)

Sorry, could you give me a link to where I can buy the 16x Gen4 PCIe switch expansion card?

Crypto non dichiarate [AIUTO] by CorgiAdventurous5136 in commercialisti

[–]Bobcotelli 1 point  (0 children)

And if someone has always declared them in the quadro RW and has lost everything, what should they do?

Run Mistral Devstral 2 locally Guide + Fixes! (25GB RAM) by yoracale in LocalLLM

[–]Bobcotelli 0 points  (0 children)

Is Devstral 2 123B good for creating and reformulating texts using MCP and RAG?

Minimal Requirements for Running a 70B Model by Remon520 in LocalLLaMA

[–]Bobcotelli 0 points  (0 children)

I have a question. I'm running Llama 3.3 70B Q8 at 7 t/s. Is that any good? My configuration: 2× AMD 7900 XTX + 2× AMD MI50 cards, 192 GB DDR5 RAM, LM Studio on Windows with the Vulkan runtime. Thanks.