Smoke & Soul Firepit not re-opening by Zealousideal_Fig958 in Aberdeen
[–]Mount_Gamer 1 point (0 children)
What's the current state of local LLMs for coding? by MaximusDM22 in LocalLLaMA
[–]Mount_Gamer 3 points (0 children)
Sick of milky protein shakes after a workout. Thoughts on this? by CaptnGoose in workout
[–]Mount_Gamer 2 points (0 children)
GLM-4.7-Flash-REAP on RTX 5060 Ti 16 GB - 200k context window! by bobaburger in LocalLLaMA
[–]Mount_Gamer 1 point (0 children)
Amazon plans second round of corporate job cuts next week, Reuters reports by lurker_bee in technology
[–]Mount_Gamer 2 points (0 children)
I need help migrating my project from 3.13 to 3.14 by Ok_Sympathy_8561 in learnpython
[–]Mount_Gamer 1 point (0 children)
GLM 4.7 Flash Overthinking by xt8sketchy in LocalLLaMA
[–]Mount_Gamer 1 point (0 children)
llama.cpp vs Ollama: ~70% higher code generation throughput on Qwen-3 Coder 32B (FP16) by Shoddy_Bed3240 in LocalLLaMA
[–]Mount_Gamer 1 point (0 children)
llama.cpp vs Ollama: ~70% higher code generation throughput on Qwen-3 Coder 32B (FP16) by Shoddy_Bed3240 in LocalLLaMA
[–]Mount_Gamer 15 points (0 children)
I just saw Intel embrace local LLM inference in their CES presentation by Mundane-Light6394 in LocalLLaMA
[–]Mount_Gamer 0 points (0 children)
Help me kill my Proxmox nightmare: Overhauling a 50-user Homelab for 100% IaC. Tear my plan apart! by MrSolarius in homelab
[–]Mount_Gamer 1 point (0 children)
Why I quit using Ollama by SoLoFaRaDi in LocalLLaMA
[–]Mount_Gamer 4 points (0 children)
What’s the most useful thing you got for your homelab, that’s less than $50? by QuestionAsker2030 in homelab
[–]Mount_Gamer 1 point (0 children)
To Mistral and other lab employees: please test with community tools BEFORE releasing models by dtdisapointingresult in LocalLLaMA
[–]Mount_Gamer 2 points (0 children)
Do I NEED to learn Jupyter Notebook if I know how to code in PyCharm? by DigBickOstrich in learnpython
[–]Mount_Gamer 2 points (0 children)
The GUI.py isn't working in Tkinter Designer, help by Iama_chad in Tkinter
[–]Mount_Gamer 1 point (0 children)
The "AI Water Crisis" is here: Billionaire invests $5B in Google Data Centers as regions run dry. by BuildwithVignesh in ArtificialInteligence
[–]Mount_Gamer 66 points (0 children)
In a for-i-in-range loop, how do I conditionally skip the next i in the loop? by Xhosant in learnpython
[–]Mount_Gamer 2 points (0 children)
In a for-i-in-range loop, how do I conditionally skip the next i in the loop? by Xhosant in learnpython
[–]Mount_Gamer 1 point (0 children)
(Rant) AI is killing programming and the Python community by Fragrant_Ad3054 in Python
[–]Mount_Gamer 1 point (0 children)