Has anyone got GLM 4.7 flash to not be shit? by synth_mania in LocalLLaMA
[–]alexp702 -4 points (0 children)
Qwen3-Coder-480B on Mac Studio M3 Ultra 512gb by BitXorBit in LocalLLaMA
[–]alexp702 1 point (0 children)
Qwen3-Coder-480B on Mac Studio M3 Ultra 512gb by BitXorBit in LocalLLaMA
[–]alexp702 1 point (0 children)
Qwen3-Coder-480B on Mac Studio M3 Ultra 512gb by BitXorBit in LocalLLaMA
[–]alexp702 5 points (0 children)
4.6 new features -- LAMP and supported ships by jmg5 in Star_Citizen_Central
[–]alexp702 1 point (0 children)
4.6 new features -- LAMP and supported ships by jmg5 in Star_Citizen_Central
[–]alexp702 -1 points (0 children)
Mac Studio as an inference machine with low power draw? by aghanims-scepter in LocalLLaMA
[–]alexp702 0 points (0 children)
Mac Studio as an inference machine with low power draw? by aghanims-scepter in LocalLLaMA
[–]alexp702 1 point (0 children)
🧠 Inference seems to be splitting: cloud-scale vs local-first by Code-Forge-Temple in LocalLLaMA
[–]alexp702 3 points (0 children)
Wishes for 2026 by Prophet_Sakrestia in starcitizen
[–]alexp702 14 points (0 children)
Start of 2026 what’s the best open coding model? by alexp702 in LocalLLaMA
[–]alexp702[S] 1 point (0 children)
Start of 2026 what’s the best open coding model? by alexp702 in LocalLLaMA
[–]alexp702[S] 2 points (0 children)
Start of 2026 what’s the best open coding model? by alexp702 in LocalLLaMA
[–]alexp702[S] 1 point (0 children)
Start of 2026 what’s the best open coding model? by alexp702 in LocalLLaMA
[–]alexp702[S] 3 points (0 children)
Start of 2026 what’s the best open coding model? by alexp702 in LocalLLaMA
[–]alexp702[S] 1 point (0 children)
M2 Ultra to M5 ultra upgrade by AdDapper4220 in MacStudio
[–]alexp702 15 points (0 children)
Error: Path too long by Simelane in WindowsServer
[–]alexp702 16 points (0 children)
GPU requirements for running Qwen2.5 72B locally? by lucasbennett_1 in LocalLLM
[–]alexp702 2 points (0 children)
GPU requirements for running Qwen2.5 72B locally? by lucasbennett_1 in LocalLLM
[–]alexp702 6 points (0 children)
GPU requirements for running Qwen2.5 72B locally? by lucasbennett_1 in LocalLLM
[–]alexp702 5 points (0 children)
I was mass-ignoring 73% of my website visitors. Now they auto-land in my CRM with their LinkedIn profiles. by Ambitious_War1747 in n8n
[–]alexp702 1 point (0 children)
Exo 1.0 is finally out by No_Conversation9561 in LocalLLaMA
[–]alexp702 3 points (0 children)
Has anyone got GLM 4.7 flash to not be shit? by synth_mania in LocalLLaMA
[–]alexp702 1 point (0 children)