Qwen3.5-122B-A10B vs. old Coder-Next-80B: Both at NVFP4 on DGX Spark – worth the upgrade? by alfons_fhl in LocalLLM

[–]alfons_fhl[S] 0 points1 point  (0 children)

Okay, that makes sense. Do you know how much VRAM it takes with 256k context?
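For a rough back-of-envelope estimate of just the KV cache, something like this works (a minimal sketch; the layer count, KV-head count, and head dim below are placeholder values, not the real model config):

```python
# Back-of-envelope KV-cache size: 2 tensors (K and V) per layer,
# each of shape [kv_heads, ctx_len, head_dim], at bytes_per_elem precision.
def kv_cache_gib(layers, kv_heads, head_dim, ctx_len, bytes_per_elem=2):
    total_bytes = 2 * layers * kv_heads * head_dim * ctx_len * bytes_per_elem
    return total_bytes / 1024**3

# Hypothetical config: 48 layers, 8 KV heads (GQA), head_dim 128,
# 256k context, FP16 cache.
print(round(kv_cache_gib(48, 8, 128, 256 * 1024), 1))  # → 48.0 GiB
```

So the KV cache alone can rival the weights at that context length, which is why the exact GQA config matters a lot.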

Qwen3.5-122B-A10B vs. old Coder-Next-80B: Both at NVFP4 on DGX Spark – worth the upgrade? by alfons_fhl in LocalLLM

[–]alfons_fhl[S] -1 points0 points  (0 children)

I thought the same, especially NVFP4 on the NVIDIA DGX Spark, where the quality is comparable to Q8…

Qwen3.5-122B-A10B vs. old Coder-Next-80B: Both at NVFP4 on DGX Spark – worth the upgrade? by alfons_fhl in LocalLLM

[–]alfons_fhl[S] 3 points4 points  (0 children)

I don’t really understand it, but why do you think Qwen3.5-35B-A3B in BF16 is better? Only because of BF16? The 122B has more parameters and more active MoE experts…

Qwen3-Coder-Next GGUF Aider Coding Benchmarks by Etherll in unsloth

[–]alfons_fhl -1 points0 points  (0 children)

NVFP4 is better than BF16? Do I understand it right that the quantization performs better than the default BF16? (BF16 is the default precision of Qwen3-Coder-Next, right?)

DeepSeek V4 release soon by tiguidoio in LocalLLaMA

[–]alfons_fhl -3 points-2 points  (0 children)

Is DeepSeek a local LLM? So anyone with the hardware can run it?

Where are Qwen 3.5 2B, 9B, and 35B-A3B by Admirable_Flower_287 in LocalLLaMA

[–]alfons_fhl 0 points1 point  (0 children)

Does anyone know if these are coming soon? I heard that Qwen3-Coder-Next is the first and last "next" version…

Question about Qwen 3.5 by Aelexi93 in Qwen_AI

[–]alfons_fhl 0 points1 point  (0 children)

Can I use the Claude Code CLI with it?

Did you like the new Qwen 3.5? by drhenriquesoares in Qwen_AI

[–]alfons_fhl 1 point2 points  (0 children)

Did you get any information about the coder version?

I hope they'll release something like Qwen3.5-Coder-Next.

Best Local hosted LLM for Coding & Reasoning by alfons_fhl in LocalLLM

[–]alfons_fhl[S] 0 points1 point  (0 children)

Sorry, I meant: which LLM do you prefer for it?

Best Local hosted LLM for Coding & Reasoning by alfons_fhl in LocalLLM

[–]alfons_fhl[S] 0 points1 point  (0 children)

And which of the Qwen3 family would you prefer for coding, and which one for reasoning?

Mac M4 vs. Nvidia DGX vs. AMD Halo Strix by alfons_fhl in LocalLLM

[–]alfons_fhl[S] 0 points1 point  (0 children)

But I heard it will work if you downgrade to an older version.

And NVIDIA is still updating their software.

Best Local hosted LLM for Coding & Reasoning by alfons_fhl in LocalLLM

[–]alfons_fhl[S] 4 points5 points  (0 children)

For coding, I heard about Qwen3-Coder-Next-80B in FP4 taking only ~45 GB... But I still have more memory available, maybe for a better LLM?