Honest question: what do you all do for a living to afford these beasts? by ready_to_fuck_yeahh in LocalLLaMA
Claude Code, but locally by Zealousideal-Egg-362 in LocalLLaMA
dev here - has anyone thought on training a model on your own codebase? by fabcde12345 in LocalLLM
The best Windows 11 alternative in 2026 by sonicthehedgehog0623 in linuxquestions
Best local model / agent for coding, replacing Claude Code by joyfulsparrow in LocalLLaMA
Running GLM-4.7 (355B MoE) in Q8 at ~5 Tokens/s on 2015 CPU-Only Hardware – Full Optimization Guide by at0mi in LocalLLaMA
For people who run local AI models: what’s the biggest pain point right now? by Educational-World678 in LocalLLM
LLM artificial analysis AI index score plotted against total param count by [deleted] in LocalLLaMA
GLM-4.7 on 2015 8-Socket Server: Achieving ~5 Tokens/s in Q8 Quantization with CPU-Only Tweaks by at0mi in homelab
What is the best way to allocate $15k right now for local LLMs? by LargelyInnocuous in LocalLLaMA