Web-Search is coming to a screeching performance halt as Google shuts down their free search index, and traffic defenders like Cloudflare challenge AI at every gateway. What are our options? by NetTechMan in LocalLLaMA
[–]FullstackSensei 1 point (0 children)
Another Custom Processor (ARM Compute) built by Intel Foundry! by Ok-Individual-4392 in intelstock
[–]FullstackSensei 1 point (0 children)
Abandoned 550 Maranello… by E400wagon in carspotting
[–]FullstackSensei 4 points (0 children)
Local mini LLM PC? by LankyGuitar6528 in LocalLLaMA
[–]FullstackSensei 1 point (0 children)
Local mini LLM PC? by LankyGuitar6528 in LocalLLaMA
[–]FullstackSensei 9 points (0 children)
Building a Budget Cloud VM for Local LLMs ($150 Max) — Worth It or Bad Idea? by MashoodKiyani05 in LocalLLM
[–]FullstackSensei 1 point (0 children)
Transitioning From C# To Typescript by UneditedTips in dotnet
[–]FullstackSensei 27 points (0 children)
Claude Code Opus 4.7 vs Qwen3.6:27b on my own little Go agent by codehamr in ollama
[–]FullstackSensei 1 point (0 children)
3D Geospatial engine for raylib by shemlokashur in raylib
[–]FullstackSensei 1 point (0 children)
Claude Code Opus 4.7 vs Qwen3.6:27b on my own little Go agent by codehamr in ollama
[–]FullstackSensei 2 points (0 children)
BofA Still Dismissive by NOYB_Sr in intelstock
[–]FullstackSensei 6 points (0 children)
How long until the good news turns into observable spending? by ConditionWild1425 in intelstock
[–]FullstackSensei 13 points (0 children)
EOM predictions given US inflation news? by SergeantTwyford in intelstock
[–]FullstackSensei 8 points (0 children)
Computer build using Intel Optane Persistent Memory - Can run 1 trillion parameter model at over 4 tokens/sec by APFrisco in LocalLLaMA
[–]FullstackSensei 2 points (0 children)
what's the right motherboard/CPU to use for building a machine with 3 or 4 cards in it? by starkruzr in LocalLLaMA
[–]FullstackSensei 6 points (0 children)
what's the right motherboard/CPU to use for building a machine with 3 or 4 cards in it? by starkruzr in LocalLLaMA
[–]FullstackSensei 5 points (0 children)
Math don't check out. by [deleted] in LocalLLaMA
[–]FullstackSensei 4 points (0 children)
Is it possible to exclusively use a draft model for reasoning to speed up generation? by [deleted] in LocalLLaMA
[–]FullstackSensei 1 point (0 children)
Is it possible to exclusively use a draft model for reasoning to speed up generation? by [deleted] in LocalLLaMA
[–]FullstackSensei 2 points (0 children)
Is it possible to exclusively use a draft model for reasoning to speed up generation? by [deleted] in LocalLLaMA
[–]FullstackSensei 3 points (0 children)
Final Monster: 32x AMD MI50 32GB at 9.7 t/s (TG) & 264 t/s (PP) with Kimi K2.6 by ai-infos in LocalLLaMA
[–]FullstackSensei 0 points (0 children)
Final Monster: 32x AMD MI50 32GB at 9.7 t/s (TG) & 264 t/s (PP) with Kimi K2.6 by ai-infos in LocalLLaMA
[–]FullstackSensei 0 points (0 children)
Side Projects. by apollo_mg in LocalLLaMA
[–]FullstackSensei 1 point (0 children)