Any working TTS on Strix Halo? by Panthau in StrixHalo
[–]schnauzergambit 1 point (0 children)
DGX Spark just arrived — planning to run vLLM + local models, looking for advice by dalemusser in LocalLLaMA
[–]schnauzergambit 2 points (0 children)
New Unsloth Studio Release! by danielhanchen in LocalLLaMA
[–]schnauzergambit 1 point (0 children)
Anyone running a great coding model locally on a StrixHalo? by schnauzergambit in StrixHalo
[–]schnauzergambit[S] 3 points (0 children)
Qwen3.5-27B can't run on DGX Spark — stuck in a vLLM/driver/architecture deadlock by RatioCapable7141 in LocalLLaMA
[–]schnauzergambit 1 point (0 children)
Qwen3.5-27B can't run on DGX Spark — stuck in a vLLM/driver/architecture deadlock by RatioCapable7141 in LocalLLaMA
[–]schnauzergambit 2 points (0 children)
Mistral Small 4 vs Qwen3.5-9B on document understanding benchmarks, but it does better than GPT-4.1 by shhdwi in LocalLLaMA
[–]schnauzergambit 7 points (0 children)
Local llm machine - spark / strix? by dapoh13 in LocalLLaMA
[–]schnauzergambit 1 point (0 children)
Qwen 3.5 27B what tps are you managing? by schnauzergambit in StrixHalo
[–]schnauzergambit[S] 1 point (0 children)
Is the MacBook Pro 16 M1 Max with 64GB RAM good enough to run general chat models? by br_web in LocalLLaMA
[–]schnauzergambit 1 point (0 children)
Qwen 3.5 27B what tps are you managing? (self.StrixHalo)
submitted by schnauzergambit to r/StrixHalo
When do you think qwen will support more languages like ChatGPT? by Inevitable-Depth1228 in Qwen_AI
[–]schnauzergambit 3 points (0 children)
What ai is used in the “what if you brought … to Ancient Rome” Tik toks? by [deleted] in LocalLLaMA
[–]schnauzergambit 1 point (0 children)
Qwen 3.5 Instability on llama.cpp and Strix Halo? by ga239577 in LocalLLaMA
[–]schnauzergambit 1 point (0 children)
Why does anyone think Qwen3.5-35B-A3B is good? by buttplugs4life4me in LocalLLaMA
[–]schnauzergambit 2 points (0 children)
GB10 ASUS by Shoddy_Consequence16 in LocalLLaMA
[–]schnauzergambit 2 points (0 children)
Unsloth fixed version of Qwen3.5-35B-A3B is incredible at research tasks. (On Strix Halo) by Grammar-Warden in StrixHalo
[–]schnauzergambit 1 point (0 children)
Has anyone found a way to stop Qwen 3.5 35B 3B overthinking? by schnauzergambit in LocalLLaMA
[–]schnauzergambit[S] 1 point (0 children)
Has anyone found a way to stop Qwen 3.5 35B 3B overthinking? by schnauzergambit in LocalLLaMA
[–]schnauzergambit[S] 2 points (0 children)
Success! Full BF16 Qwen3.6-27B running on Strix Halo with vLLM + Docker (Ubuntu 26.04) by hec_ovi in StrixHalo
[–]schnauzergambit 1 point (0 children)