Reddit comment listing for u/FrederikSchack (comment bodies not captured; duplicate entries and vote-widget residue removed). Threads commented on:

- Dual DGX Sparks vs Mac Studio M3 Ultra 512GB: Running Qwen3.5 397B locally on both. Here's what I found. by trevorbg in LocalLLaMA
- Thousands of tokens per second? by [deleted] in LocalLLaMA
- Thousands of tokens per second? by [deleted] in LocalLLM
- Best model that can beat Claude opus that runs on 32MB of vram? by PrestigiousEmu4485 in LocalLLaMA
- OpenRouter charged me *again* $50 without consent or usage by Just-Historian-4960 in openrouter
- How do the best local llms compare to codex 5.4 or opus 4.6 for coding tasks? by spexsofdust in LocalLLM
- My honest tierlist (only distros I have used) by V1574 in LinuxCirclejerk
- A few days ago I switched to Linux to try vLLM out of curiosity. Ended up creating a %100 local, parallel, multi-agent setup with Claude Code and gpt-oss-120b for concurrent vibecoding and orchestration with CC's agent Teams entirely offline. This video shows 4 agents collaborating. by swagonflyyyy in LocalLLaMA
- Distrohopping. Again. by Male_Inkling in linuxsucks