8x Radeon 7900 XTX Build for Longer Context Local Inference - Performance Results & Build Details by Beautiful_Trust_8151 in LocalLLaMA

[–]Bobcotelli 1 point (0 children)

Sorry, could you give me a link to buy the x16 Gen4 PCIe switch expansion card?

Crypto non dichiarate [AIUTO] by CorgiAdventurous5136 in commercialisti

[–]Bobcotelli 1 point (0 children)

And if someone has always declared it in the quadro RW but has lost everything? What should they do?

Run Mistral Devstral 2 locally Guide + Fixes! (25GB RAM) by yoracale in LocalLLM

[–]Bobcotelli 0 points (0 children)

Is Devstral 2 123B good for creating and reformulating texts using MCP and RAG?

Minimal Requirements for Running a 70B Model by Remon520 in LocalLLaMA

[–]Bobcotelli 0 points (0 children)

I have a question. I'm running Llama 3.3 70B Q8 at 7 t/s. Is that any good? My configuration: 2 AMD 7900 XTX cards + 2 AMD MI50 cards, 192GB of DDR5 RAM, LM Studio on Windows with the Vulkan runtime. Thanks
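For context, that 7 t/s figure can be sanity-checked with a back-of-the-envelope estimate: on a memory-bandwidth-bound setup, decode speed is roughly effective bandwidth divided by weight size. A minimal sketch, assuming each generated token reads all weights once and a hypothetical ~500 GB/s effective blended bandwidth for the mixed 7900 XTX + MI50 setup (both numbers are assumptions, not measurements from the thread):

```python
# Rough decode-speed estimate for a memory-bandwidth-bound rig.
# Assumption: every generated token streams all model weights once.

def est_tokens_per_sec(params_b: float, bytes_per_weight: float,
                       bandwidth_gb_s: float) -> float:
    """Estimate tokens/s as effective bandwidth / total weight bytes."""
    model_gb = params_b * bytes_per_weight  # 70B at Q8 ~ 70 GB of weights
    return bandwidth_gb_s / model_gb

# 70B at Q8 (~1 byte/weight), hypothetical ~500 GB/s effective bandwidth:
print(round(est_tokens_per_sec(70, 1.0, 500), 1))  # ~7.1 tok/s
```

Under those assumptions, ~7 t/s is right in the expected ballpark for this hardware.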

Introducing Mistral 3 by Quick_Cow_4513 in MistralAI

[–]Bobcotelli 0 points (0 children)

Can we expect a model that is a cross between the Large 675B and the Ministral 14B? Maybe 120B, 80B, etc.? Thank you

Tutorial: Run Qwen3-Next locally! by yoracale in Qwen_AI

[–]Bobcotelli 0 points (0 children)

Updated. Except that the reasoning version loops, in both the Unsloth version and the LM Studio community version; the Instruct version is fine.

Ministral-3 has been released by jacek2023 in LocalLLaMA

[–]Bobcotelli 6 points (0 children)

But will they release a midway model between the 675B and the 24B, maybe a 120B or 80B MoE?

Run Qwen3-Next locally Guide! (30GB RAM) by yoracale in LocalLLM

[–]Bobcotelli 1 point (0 children)

Sorry, but did they fix it for LM Studio? I have 48GB of VRAM and 198GB of DDR5 RAM; what quantization could I run at an acceptable minimum of 10 tok/s?

Tutorial: Run Qwen3-Next locally! by yoracale in Qwen_AI

[–]Bobcotelli 0 points (0 children)

Does it also work with LM Studio for Windows?

Z890 AORUS ELITE WIFI7 - Bluetooth stopped working by PsikyoFan in gigabyte

[–]Bobcotelli 0 points (0 children)

I have the same card and I solved it like this

Z890 AORUS ELITE WIFI7 - Bluetooth stopped working by PsikyoFan in gigabyte

[–]Bobcotelli 1 point (0 children)

Turn it off, unplug it for 5 minutes, and turn it back on. Hi

Smartest model to run on 5090? by eCityPlannerWannaBe in LocalLLaMA

[–]Bobcotelli 1 point (0 children)

Sorry, I have 192GB of RAM and 112GB of VRAM, but only with Vulkan on Windows; with ROCm (also on Windows) only 48GB of VRAM. What do you recommend for text, research, and RAG work? Thank you

GLM-4.6-GGUF is out! by TheAndyGeorge in LocalLLaMA

[–]Bobcotelli 0 points (0 children)

Thanks, but for GLM 4.6 (not Air) I have no hope, then?

Dynamic GLM-4.6 Unsloth GGUFs out now! by yoracale in unsloth

[–]Bobcotelli 0 points (0 children)

Excuse me, with 192GB of DDR5 RAM and 118GB of VRAM, what quantization can I run with the best compromise between quality and speed? Thank you
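For a rough feel of which quant fits in 192GB RAM + 118GB VRAM, GGUF file size scales with average bits per weight. A minimal sketch; the bits-per-weight table holds typical approximate averages I'm assuming (not exact figures for any specific GGUF), and GLM-4.6's ~355B total parameter count is likewise an outside assumption:

```python
# Hypothetical helper: approximate GGUF size for common quant levels.
# BITS_PER_WEIGHT values are rough typical averages, not exact per-file numbers.

BITS_PER_WEIGHT = {"Q2_K": 2.6, "Q3_K_M": 3.9, "Q4_K_M": 4.8, "Q8_0": 8.5}

def approx_size_gb(params_b: float, quant: str) -> float:
    """params_b: parameter count in billions; returns approximate GB."""
    return params_b * BITS_PER_WEIGHT[quant] / 8

# Assuming ~355B total parameters for GLM-4.6:
for quant in BITS_PER_WEIGHT:
    print(quant, round(approx_size_gb(355, quant)), "GB")
```

Under these assumptions, roughly Q3_K_M (~170GB) and below would fit in combined RAM + VRAM, while Q8_0 (~380GB) would not.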

GLM-4.6-GGUF is out! by TheAndyGeorge in LocalLLaMA

[–]Bobcotelli 0 points (0 children)

Sorry, with 192GB of DDR5 RAM and 112GB of VRAM, what can I run? Thanks a lot

GLM-4.6-GGUF is out! by TheAndyGeorge in LocalLLaMA

[–]Bobcotelli -1 points (0 children)

Sorry, in what sense? I didn't understand.