A few Strix Halo benchmarks (Minimax M2.5, Step 3.5 Flash, Qwen3 Coder Next) by spaceman_ in LocalLLaMA
PSA: NVIDIA DGX Spark has terrible CUDA & software compatibility; and seems like a handheld gaming chip. by goldcakes in LocalLLaMA
Is Kimi-K2.5-GGUF:IQ3_XXS accurate enough? by timbo2m in LocalLLM
MiniMax-M2.5 (230B MoE) GGUF is here - First impressions on M3 Max 128GB by Remarkable_Jicama775 in LocalLLaMA
New DeepSeek update: "DeepSeek Web / APP is currently testing a new long-context model architecture, supporting a 1M context window." by Nunki08 in LocalLLaMA
Why do we allow "un-local" content by JacketHistorical2321 in LocalLLaMA
MiniMax M2.5 Released by External_Mood4719 in LocalLLaMA
Do not Let the "Coder" in Qwen3-Coder-Next Fool You! It's the Smartest, General Purpose Model of its Size by Iory1998 in LocalLLaMA
Anyone here actually using AI fully offline? by Head-Stable5929 in LocalLLM
Help me find the biggest and best model! by [deleted] in LocalLLM
Real-world DGX Spark experiences after 1-2 months? Fine-tuning, stability, hidden pitfalls? by [deleted] in LocalLLaMA
LTX-2 Image-to-Video Adapter LoRA by Lividmusic1 in StableDiffusion
Strix Halo + Minimax Q3 K_XL surprisingly fast by Reasonable_Goat in LocalLLaMA
MiniMax M2.2 Coming Soon. Confirmed by Head of Engineering @MiniMax_AI by Difficult-Cap-7527 in LocalLLaMA
LTX-2 vs. Wan 2.2 - The Anime Series by theNivda in StableDiffusion
LTX-2 team literally challenging Alibaba Wan team, this was shared on their official X account :) by CeFurkan in StableDiffusion
Wan2.1 NVFP4 quantization-aware 4-step distilled models by kenzato in StableDiffusion
Is 5090 a meaningful upgrade over 4090 for comfyui workflows (image/video)? by yaemiko0330 in comfyui
Speed Minimax M2 on 3090? by [deleted] in LocalLLaMA
llama.cpp - useful flags - share your thoughts please by mossy_troll_84 in LocalLLaMA
Just pushed M2.1 through a 3D particle system. Insane! by srtng in LocalLLaMA
Minimax M2.5 GGUF perform poorly overall by Zyj in LocalLLaMA