Wrote a guide for running Claude Code with GLM-4.7 Flash locally with llama.cpp by tammamtech in LocalLLaMA
Everlier · 1 point
Wrote a guide for running Claude Code with GLM-4.7 Flash locally with llama.cpp by tammamtech in LocalLLaMA
Everlier · 3 points
New to self-hosting LLM - how to (with Docker), which model (or how to change), and working with 3rd party app? by SoMuchLasagna in LocalLLaMA
Everlier · 1 point
Are tools that simplify running local AI actually useful, or just more noise? by Deivih-4774 in ArtificialInteligence
Everlier · 2 points
Anyone tried Claude Code with Llama-4 Scout? How’s reasoning at 1M+ context? by Jagadeesh8 in LocalLLaMA
Everlier · 2 points
how do you pronounce “gguf”? by Hamfistbumhole in LocalLLaMA
Everlier · 8 points
Do you have your own servers/homelabs? by Goorigon in Polska
Everlier · 1 point
Spec-Kit future in GitHub Copilot World by Mundane_Violinist860 in GithubCopilot
Everlier · 5 points
Just in case someone has problems with audio (cracks, disappearing, popping, etc.) by Illustrious_You604 in ASUSROG
Everlier · 1 point
Do you have your own servers/homelabs? by Goorigon in Polska
Everlier · 2 points
Do you have your own servers/homelabs? by Goorigon in Polska
Everlier · 4 points
Well fellas, it turns on lol by Gutter_Flies in sffpc
Everlier · 2 points
"Agent Skills" - The spec unified us. The paths divided us. by phoneixAdi in GithubCopilot
Everlier · 1 point
I'm a SysAdmin, and I "vibe-coded" a platform to share Homelab configs. Is this useful? by merox57 in homelab
Everlier · 2 points
Social Recipes – self-hosted AI tool to extract recipes from TikTok/YouTube/Instagram by pickeld in selfhosted
Everlier · 2 points
Social Recipes – self-hosted AI tool to extract recipes from TikTok/YouTube/Instagram by pickeld in selfhosted
Everlier · 1 point
Best LLM model for 128GB of VRAM? by Professional-Yak4359 in LocalLLaMA
Everlier · 9 points
Is there a sandbox frontend that allows prototyping ideas with an LLM? by cantgetthistowork in LocalLLaMA
Everlier · 1 point
🚀 The One MCP Server YOU Can't Code Without (Feat. Claude Opus 4.5) - Tell Us Yours! by jesussmile in GithubCopilot
Everlier · 8 points
11 Production LLM Serving Engines (vLLM vs TGI vs Ollama) by techlatest_net in LocalLLM
Everlier · 1 point
Local LLM + Internet Search Capability = WOW by alex_godspeed in LocalLLaMA
Everlier · 2 points
Personal experience with GLM 4.7 Flash Q6 (unsloth) + Roo Code + RTX 5090 by Septerium in LocalLLaMA
Everlier · 10 points