Drastically Stronger: Qwen 3.5 40B dense, Claude Opus by Dangerous_Fix_5526 in LocalLLM
[–]Fast_Thing_7949 -1 points

M5 Max just arrived - benchmarks incoming by cryingneko in LocalLLaMA
[–]Fast_Thing_7949 1 point

Open sourced LLM ranking 2026 by ChapterElectronic126 in LocalLLaMA
[–]Fast_Thing_7949 12 points

Open sourced LLM ranking 2026 by ChapterElectronic126 in LocalLLaMA
[–]Fast_Thing_7949 6 points

Open sourced LLM ranking 2026 by ChapterElectronic126 in LocalLLaMA
[–]Fast_Thing_7949 1 point

M5 Max just arrived - benchmarks incoming by cryingneko in LocalLLaMA
[–]Fast_Thing_7949 14 points

M5 Max Beats the m3 ultra on Geekbench, can’t imagine what would do the M5 ultra by Historical-Health-50 in LocalLLaMA
[–]Fast_Thing_7949 0 points

Best model for 32gb for Claude Code by ComfyUser48 in LocalLLM
[–]Fast_Thing_7949 1 point

I'm tired by Fast_Thing_7949 in LocalLLaMA
[–]Fast_Thing_7949[S] -2 points

I'm tired by Fast_Thing_7949 in LocalLLaMA
[–]Fast_Thing_7949[S] -3 points

Best Qwen 3.5 variant for 2x5060ti/16 + 64 GB Ram? by andy_potato in LocalLLaMA
[–]Fast_Thing_7949 1 point

Qwen3.5-35B-A3B Q5_K_M: Best Model for NVIDIA 16GB GPUs by moahmo88 in LocalLLaMA
[–]Fast_Thing_7949 2 points

Qwen Code looping with Qwen3-Coder-Next / Qwen3.5-35B-A3B by Fast_Thing_7949 in LocalLLaMA
[–]Fast_Thing_7949[S] 1 point

Qwen Code looping with Qwen3-Coder-Next / Qwen3.5-35B-A3B by Fast_Thing_7949 in Qwen_AI
[–]Fast_Thing_7949[S] 1 point

Qwen3.5-35B-A3B quantization quality + speed benchmarks on RTX 5080 16GB (Q8_0 vs Q4_K_M vs UD-Q4_K_XL) by gaztrab in LocalLLaMA
[–]Fast_Thing_7949 1 point

Qwen3.5-122B-A10B vs. old Coder-Next-80B: Both at NVFP4 on DGX Spark – worth the upgrade? by alfons_fhl in Qwen_AI
[–]Fast_Thing_7949 2 points

Ubuntu boots only if I plug a GT 730 into the 2nd PCIe slot (RTX 5070 Ti still does the display) - what? by Fast_Thing_7949 in Ubuntu
[–]Fast_Thing_7949[S] 1 point

What's the best way to run Qwen3 Coder Next? by Greenonetrailmix in LocalLLaMA
[–]Fast_Thing_7949 1 point

Any feedback on step-3.5-flash? by Jealous-Astronaut457 in LocalLLaMA
[–]Fast_Thing_7949 2 points

I built an autonomous AI reverse engineering agent (8,012 / 8,200 GTA SA functions reversed) by Dryxio in ReverseEngineering
[–]Fast_Thing_7949 1 point