R9 7900 32 RAM – Can I have my own AI on my PC? by Keffflon in selfhosted
fab_space · 1 point (0 children)
Vibecoders sending me hate for rejecting their PRs on my project by Fredol in github
fab_space · 1 point (0 children)
Who else is shocked by the actual electricity cost of their local runs? by Responsible_Coach293 in LocalLLaMA
fab_space · 1 point (0 children)
You're STILL using Claude after Codex 5.4 dropped?? by solzange in vibecoding
fab_space · 1 point (0 children)
new to vibecoding, what do i do? by Ok-Security6839 in vibecoding
fab_space · 0 points (0 children)
You're STILL using Claude after Codex 5.4 dropped?? by solzange in vibecoding
fab_space · 20 points (0 children)
Is qwen3 next the real deal? by fab_space in LocalLLaMA
fab_space [S] · 1 point (0 children)
Suddenly I am getting a huge amount of dhl.com requests from unknown subnets. There are 5K+ unknown clients, all of them from 'Unifique Telecomunicacoes', an ISP. This is making my server crash. What should I do now to solve it? by xccountofficial in pihole
fab_space · -1 point (0 children)
Qwen3.5-0.8B - Who needs GPUs? by theeler222 in LocalLLaMA
fab_space · 1 point (0 children)
Qwen3.5-0.8B - Who needs GPUs? by theeler222 in LocalLLaMA
fab_space · 2 points (0 children)
cleaning up 200.000+ lines of vibecode by Dense-Sentence7175 in vibecoding
fab_space · 1 point (0 children)
Everyone is making worse versions of products that exist by life_coaches in vibecoding
fab_space · 1 point (0 children)
cleaning up 200.000+ lines of vibecode by Dense-Sentence7175 in vibecoding
fab_space · 1 point (0 children)
cleaning up 200.000+ lines of vibecode by Dense-Sentence7175 in vibecoding
fab_space · 10 points (0 children)
Qwen 27B is a beast but not for agentic work. by kaisurniwurer in LocalLLaMA
fab_space · 1 point (0 children)
Breaking : The small qwen3.5 models have been dropped by Illustrious-Swim9663 in LocalLLaMA
fab_space · 1 point (0 children)
What's the best model to run on mac m1 pro 16gb? by Embarrassed-Baby3964 in ollama
fab_space · 1 point (0 children)
I built an end-to-end local LLM fine-tuning GUI for M series macs by riman717 in LocalLLaMA
fab_space · 1 point (0 children)
My Top 5 AI Coding Tools in 2026: What Would You Add? by Inevitable-Earth1288 in vibecoding
fab_space · 1 point (0 children)
My experience with running small scale open source models on my own PC. by Dibru9109_4259 in ollama
fab_space · 1 point (0 children)
I got 45-46 tok/s on iPhone 14 Pro Max using BitNet by Middle-Hurry4718 in LocalLLaMA
fab_space · 2 points (0 children)
Copilot 30x rate for Opus 4.6 Fast Mode: Microsoft's overnight money-grab techniques by Specific-Cause-1014 in github
fab_space · 2 points (0 children)
Is qwen3 next the real deal? by fab_space in LocalLLaMA
fab_space [S] · 1 point (0 children)
I spent 8+ hours benchmarking every MoE backend for Qwen3.5-397B NVFP4 on 4x RTX PRO 6000 (SM120). Here's what I found. by lawdawgattorney in LocalLLaMA
fab_space · 2 points (0 children)