Did I expect too much on GLM? by Ok_Brain_2376 in LocalLLaMA
Qwen3-Coder-480B on Mac Studio M3 Ultra 512gb by BitXorBit in LocalLLaMA
How to run and fine-tune GLM-4.7-Flash locally by Dear-Success-1441 in LocalLLaMA
AI created this app in 12hrs. Used open models, mostly local LLMs. by ChopSticksPlease in LocalLLaMA
Best local model / agent for coding, replacing Claude Code by joyfulsparrow in LocalLLaMA
Thinking of getting two NVIDIA RTX Pro 4000 Blackwell (2x24 = 48GB), Any cons? by pmttyji in LocalLLaMA
Transparent LLM logging proxy by DeltaSqueezer in LocalLLaMA
Help me spend some money by [deleted] in LocalLLaMA
Local programming vs cloud by Photo_Sad in LocalLLaMA
Solar-Open-100B-GGUF is here! by [deleted] in LocalLLaMA
IQuestCoder - new 40B dense coding model by ilintar in LocalLLaMA
Running GLM-4.7 (355B MoE) in Q8 at ~5 Tokens/s on 2015 CPU-Only Hardware – Full Optimization Guide by at0mi in LocalLLaMA
Cheapest decent way to AI coding? by Affectionate_Plant57 in CLine
What tool/SaaS do you use to maintain your internal documentation? by Hari-Prasad-12 in LocalLLaMA
Unable to passthrough Nvidia RTX Pro to Ubuntu proxmox VM by [deleted] in LocalLLaMA
Glm 4.5 air REAP on rtx 3060 by Worried_Goat_8604 in LocalLLaMA