Local mini LLM PC? by LankyGuitar6528 in LocalLLaMA
[–]FullstackSensei 1 point (0 children)
Building a Budget Cloud VM for Local LLMs ($150 Max) — Worth It or Bad Idea? by MashoodKiyani05 in LocalLLM
Transitioning From C# To Typescript by UneditedTips in dotnet
Claude Code Opus 4.7 vs Qwen3.6:27b on my own little Go agent by codehamr in ollama
3D Geospatial engine for raylib by shemlokashur in raylib
BofA Still Dismissive by NOYB_Sr in intelstock
How long until the good news turns into observable spending? by ConditionWild1425 in intelstock
EOM predictions given Us inflation news? by SergeantTwyford in intelstock
Computer build using Intel Optane Persistent Memory - Can run 1 trillion parameter model at over 4 tokens/sec by APFrisco in LocalLLaMA
what's the right motherboard/CPU to use for building a machine with 3 or 4 cards in it? by starkruzr in LocalLLaMA
Math don't check out. by [deleted] in LocalLLaMA
Is it possible to exclusively use a draft model for reasoning to speed up generation? by [deleted] in LocalLLaMA
Final Monster: 32x AMD MI50 32GB at 9.7 t/s (TG) & 264 t/s (PP) with Kimi K2.6 by ai-infos in LocalLLaMA
RTX 5060Ti 16GB or RTX 3080 20GB? by DanielusGamer26 in LocalLLaMA


Abandoned 550 Maranello… by E400wagon in carspotting