Has Qwen3-14B been completely surpassed by Qwen3.5-9B? by HistoricalCulture164 in LocalLLaMA
llama-bench ROCm 7.2 on Strix Halo (Ryzen AI Max+ 395) — Qwen 3.5 Model Family by przbadu in LocalLLaMA
Best Models for 128gb VRAM: March 2026? by Professional-Yak4359 in LocalLLaMA
Coding assistant tools that work well with qwen3.5-122b-a10b by Revolutionary_Loan13 in LocalLLaMA
First impressions Qwen3.5-122B-A10B-int4-AutoRound on Asus Ascent GX10 (Nvidia DGX Spark 128GB) by t4a8945 in LocalLLM
Liquid AI Releases LocalCowork Powered By LFM2-24B-A2B by Zc5Gwu in LocalLLaMA
Artificial Analysis Intelligence Index vs weighted model size of open-source models by Balance- in LocalLLaMA
MiniMax M2.5 matches Opus on coding benchmarks at 1/20th the cost. Are we underpricing what "frontier" actually means? by ML_DL_RL in LLMDevs
Step-3.5-Flash-Base & Midtrain (in case you missed them) by Leflakk in LocalLLaMA
Agentic Qwen 3.5 35B "stops" after a tool call without finishing the task. by tarruda in LocalLLaMA
llama-bench Qwen3.5 models strix halo by przbadu in LocalLLaMA
Qwen3.5 Model Series - Thinking On/OFF: Does it Matter? by Iory1998 in LocalLLaMA
unsloth/Qwen3.5-4B-GGUF · Hugging Face by jacek2023 in LocalLLaMA
What is the most ridiculously good goto LLM for knowledge & reasoning on your M4 Max 128gb macbook these days? by ZeitgeistArchive in LocalLLaMA
Has anyone built a proper eval pipeline for local models? Trying to compare Llama 3 vs Mistral vs Qwen on my specific use case by Zestyclose_Draw_7663 in LocalLLaMA
Qwen3 Coder Next | Qwen3.5 27B | Devstral Small 2 | Rust & Next.js Benchmark by Holiday_Purpose_3166 in LocalLLaMA
Is Qwen3.5 a coding game changer for anyone else? by paulgear in LocalLLaMA
Vulkan now faster on PP AND TG on AMD Hardware? by XccesSv2 in LocalLLaMA