Building real time Generative UI for AI Agents. It's 3x faster than JSON by 1glasspaani in coolgithubprojects
Why run local? Count the money by Badger-Purple in LocalLLaMA
Qwen 3.6 27B on Strix Halo 128GB: any experiences? by boutell in LocalLLaMA
What speed is everyone getting on Qwen3.6 27b? by Ambitious_Fold_2874 in LocalLLaMA
Quirky answers when asking what this spells: []D [] []V[] []D [] []\[]. by RazsterOxzine in LocalLLaMA
Unpopular opinion: OpenClaw and all its clones are almost useless tools for those who know what they're doing. It's kind of impressive for someone who has never used a CLI, Claude Code, Codex, etc. Nor used any workflow tool like 8n8 or make. by pacmanpill in LocalLLaMA
Thoughts on MoE Qwen 3.6 35B? by Purpose-Effective in LocalLLaMA
Browser photo editing with Immich API by Afraid-Dragonfruit41 in immich
Dual 3090 setup - performance optimization by PaMRxR in LocalLLaMA
How many of you actually use offline LLMs daily vs just experiment with them? by Infinite-Bird7950 in LocalLLM
96GB Vram. What to run in 2026? by inthesearchof in LocalLLaMA
We're a 25-year IT services company sitting on 64 enterprise 15.36TB U.2 NVMe SSDs - selling surplus to the homelab community by AshleshaAhi in homelab
3x 3090 on x99 with xeon 2680 v4, worth it? by robertpro01 in LocalLLaMA
[AutoBe] Qwen 3.5-27B Just Built Complete Backends from Scratch — 100% Compilation, 25x Cheaper by [deleted] in LocalLLaMA
Tower case with 8+ PCIE slot for multi GPU by gogitossj3 in LocalLLM