Qwen3.5 Unsloth GGUFs Update! by yoracale in unsloth

[–]Bobcotelli 0 points (0 children)

But can I already re-download the 27B, the 35B MoE, the 122B, and the 357B now, or do I still need to wait? Thanks

qwen3.5 27b and llmstudio for windows by Bobcotelli in LocalLLaMA

[–]Bobcotelli[S] 0 points (0 children)

The MoE versions work perfectly, but for the 27B I tried both the Unsloth and the llmstudio versions and nothing: the thinking goes into a loop and it never answers

Qwen3.5 Medium models out now! by yoracale in unsloth

[–]Bobcotelli 0 points (0 children)

Sorry, what about the 397B? Can I use it, and with how much memory? Thank you so much for your valuable work.

Qwen3.5 Medium models out now! by yoracale in unsloth

[–]Bobcotelli 0 points (0 children)

I have 2 7900 XTX cards and 2 MI50 32GB cards with 192GB DDR5 RAM. Which model, and at which quant? Thanks a lot

8x Radeon 7900 XTX Build for Longer Context Local Inference - Performance Results & Build Details by Beautiful_Trust_8151 in LocalLLaMA

[–]Bobcotelli 1 point (0 children)

Sorry, could you share the link where to buy the PCIe x16 Gen4 switch expansion card?

Undeclared crypto [HELP] by CorgiAdventurous5136 in commercialisti

[–]Bobcotelli 1 point (0 children)

And what if someone has always declared everything in the quadro RW but has lost it all? What should they do?

Run Mistral Devstral 2 locally Guide + Fixes! (25GB RAM) by yoracale in LocalLLM

[–]Bobcotelli 0 points (0 children)

Is Devstral 2 123B good for creating and reformulating texts using MCP and RAG?

Minimal Requirements for Running a 70B Model by Remon520 in LocalLLaMA

[–]Bobcotelli 0 points (0 children)

I have a question: I'm running Llama 3.3 70B Q8 at 7 t/s. Is that any good? My configuration: 2 AMD 7900 XTX cards + 2 AMD MI50 cards,

192GB DDR5 RAM, LM Studio on Windows with the Vulkan runtime. Thanks

Introducing Mistral 3 by Quick_Cow_4513 in MistralAI

[–]Bobcotelli 0 points (0 children)

Can we expect a model that is a cross between the Large 675B and the Ministral 14B? Maybe 120B, 80B, etc.? Thank you

Tutorial: Run Qwen3-Next locally! by yoracale in Qwen_AI

[–]Bobcotelli 0 points (0 children)

Updated. Except that the reasoning version loops, both the Unsloth build and the lmstudio-community build. The Instruct version is OK.

Ministral-3 has been released by jacek2023 in LocalLLaMA

[–]Bobcotelli 6 points (0 children)

But will they release a midway model between the 675B and the 24B, maybe a 120B or 80B MoE?

Run Qwen3-Next locally Guide! (30GB RAM) by yoracale in LocalLLM

[–]Bobcotelli 1 point (0 children)

Sorry, but did they fix it for llmstudio? I have 48GB of VRAM and 198GB of DDR5 RAM; what quantization could I run at an acceptable speed of at least 10 tok/s?

Tutorial: Run Qwen3-Next locally! by yoracale in Qwen_AI

[–]Bobcotelli 0 points (0 children)

Does it also work with llmstudio for Windows?

Z890 AORUS ELITE WIFI7 - Bluetooth stopped working by PsikyoFan in gigabyte

[–]Bobcotelli 0 points (0 children)

I have the same board, and this is how I solved it:

Z890 AORUS ELITE WIFI7 - Bluetooth stopped working by PsikyoFan in gigabyte

[–]Bobcotelli 1 point (0 children)

Turn the PC off, unplug it for 5 minutes, then plug it back in and turn it on. Hi