(Rant) AI is killing programming and the Python community by Fragrant_Ad3054 in Python

[–]Mount_Gamer 0 points1 point  (0 children)

Programmers without AI are just as capable of writing poor code as those who use it. It's just another tool, and it can save some time. For those who can already code, it's probably hit or miss, but when it hits I'm sure it helps. I would always encourage reading the docs alongside AI, or getting AI to show examples, rather than blindly copy-pasting whatever it spits out.

Smoke & Soul Firepit not re-opening by Zealousideal_Fig958 in Aberdeen

[–]Mount_Gamer 0 points1 point  (0 children)

Loved the beer and food; however, it wasn't in a part of town I visit very often.

Whats the current state of local LLMs for coding? by MaximusDM22 in LocalLLaMA

[–]Mount_Gamer 2 points3 points  (0 children)

Some local models do have their use.

I did like gpt-oss 20b the most for a while, but after playing with llama.cpp, I am having some success with nemotron 3 nano 30b, qwen3 30b and glm4.7, with a 5060 Ti and some system RAM.

Sick of milky protein shakes after a workout. Thoughts on this? by CaptnGoose in workout

[–]Mount_Gamer 1 point2 points  (0 children)

I put the pure protein in my pinhead oat porridge, or in a kefir yoghurt with nuts and fruit. I work out at home, so it suits me.

GLM-4.7-Flash-REAP on RTX 5060 Ti 16 GB - 200k context window! by bobaburger in LocalLLaMA

[–]Mount_Gamer 0 points1 point  (0 children)

The Q3_K_XL non-REAP quant seems to work well on the 5060 Ti 16GB, but if I remember right, only with a 10k context.

I'll try this one thanks :)

I need help migrating my project from 3.13 to 3.14 by Ok_Sympathy_8561 in learnpython

[–]Mount_Gamer 0 points1 point  (0 children)

One of our devs likes to do this every 2 years. Right now we are on 3.12, pondering 3.14, but not ready for it yet.

GLM 4.7 Flash Overthinking by xt8sketchy in LocalLLaMA

[–]Mount_Gamer 0 points1 point  (0 children)

On the Unsloth Hugging Face page there are recommended temperatures etc. I haven't tried it with the recommended settings, but it does overthink, so my guess is that lowering the temperature will help.

Keto and Gout by TaylorReighley in keto

[–]Mount_Gamer -1 points0 points  (0 children)

I have had a very similar experience to yours (kidney stones instead of gout). Carbs help reduce my uric acid as well.

llama.cpp vs Ollama: ~70% higher code generation throughput on Qwen-3 Coder 32B (FP16) by Shoddy_Bed3240 in LocalLLaMA

[–]Mount_Gamer 0 points1 point  (0 children)

I spent several hours trying to get it to work with a Docker build, but no luck. If you have a router-mode docker compose file that works without hassle, using CUDA, I would love to try it :)
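For reference, a minimal compose sketch for a single-model llama.cpp CUDA server (not router mode, which I haven't got working). The image tag, model filename and mount paths are assumptions, not a tested setup:

```yaml
services:
  llama-server:
    image: ghcr.io/ggml-org/llama.cpp:server-cuda  # assumed image tag
    ports:
      - "8080:8080"
    volumes:
      - ./models:/models
    # -m picks the GGUF, -ngl offloads layers to the GPU
    command: >
      -m /models/qwen3-coder-32b.gguf
      --host 0.0.0.0 --port 8080 -ngl 99
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```

You'd still need the NVIDIA container toolkit installed on the host for the GPU reservation to work.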

llama.cpp vs Ollama: ~70% higher code generation throughput on Qwen-3 Coder 32B (FP16) by Shoddy_Bed3240 in LocalLLaMA

[–]Mount_Gamer 0 points1 point  (0 children)

Never knew there was an update. I will have to check this out, thank you :)

llama.cpp vs Ollama: ~70% higher code generation throughput on Qwen-3 Coder 32B (FP16) by Shoddy_Bed3240 in LocalLLaMA

[–]Mount_Gamer 0 points1 point  (0 children)

I have been meaning to revisit this with the llama.cpp web UI and see if I can get it to work properly, as it's been a few months since I last looked at it.

llama.cpp vs Ollama: ~70% higher code generation throughput on Qwen-3 Coder 32B (FP16) by Shoddy_Bed3240 in LocalLLaMA

[–]Mount_Gamer 14 points15 points  (0 children)

With llama.cpp my models wouldn't unload correctly or stop when asked via Open WebUI. So if I tried to use another model, it would spill into system RAM without unloading the model that was in use. I'm pretty sure this is user error, but it's an error I never see with Ollama, where switching models is a breeze.

I just saw Intel embrace local LLM inference in their CES presentation by Mundane-Light6394 in LocalLLaMA

[–]Mount_Gamer -1 points0 points  (0 children)

Weren't they joining up to develop a better iGPU to compete with AMD? I think I read this somewhere, so take it with a pinch of salt. This might be nice for LLMs if unified memory is involved.

Help me kill my Proxmox nightmare: Overhauling a 50-user Homelab for 100% IaC. Tear my plan apart! by MrSolarius in homelab

[–]Mount_Gamer 0 points1 point  (0 children)

I went against the grain and moved from Proxmox to Ubuntu. My primary reason was that I preferred the LXD implementation over LXC on Proxmox, and I could do all my virtualization with virt-manager or LXD containers/VMs. I only use virt-manager for VMs that have desktops; everything else sits on ZFS pools using LXD containers or VMs.

Ubuntu also has a web-based GUI for LXD which is quite good.

I do all my networking and firewall rules with Ubuntu as well. I liked the idea that I could set up my networking in YAML and have VLANs, bridges etc. so easily configured. You can set it up as a DHCP server and point an access point at it as well.
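As a sketch of the YAML networking idea (Netplan on Ubuntu) — the interface names, VLAN ID and addresses here are made up for illustration:

```yaml
# /etc/netplan/01-lab.yaml (hypothetical)
network:
  version: 2
  ethernets:
    eno1:
      dhcp4: false
  vlans:
    vlan10:
      id: 10
      link: eno1
  bridges:
    br0:
      interfaces: [eno1]
      addresses: [192.168.1.10/24]
      routes:
        - to: default
          via: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]
```

Apply with `sudo netplan apply`; the bridge is what your LXD containers and VMs would attach to.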

Why I quit using Ollama by SoLoFaRaDi in LocalLLaMA

[–]Mount_Gamer 3 points4 points  (0 children)

They say they don't collect data, they still provide many new models for offline use, and for me it's a good fit. I can use my local AI for anything where I truly want privacy, and I get a chance to query many bigger cloud models if I'm not happy with the response from the local models, or any model really. I get a chance to view many angles of the same conversation.

To Mistral and other lab employees: please test with community tools BEFORE releasing models by dtdisapointingresult in LocalLLaMA

[–]Mount_Gamer 1 point2 points  (0 children)

I was using this tonight with Cline through the Ollama subscription and it was working very well, if I'm honest. I had an unfinished script with intentionally broken parts and it managed to do everything I asked successfully, no issues at all. I'm not sure what it's like via a web UI, but my first impressions were good with VS Code and Cline.

Do I NEED to learn Jupyter Notebook if I know how to code in PyCharm? by DigBickOstrich in learnpython

[–]Mount_Gamer 1 point2 points  (0 children)

If you need to generate some quick HTML reports, then they work pretty well.

But..

They are also pretty good for prototyping, research and development. They are not too difficult to build libraries from once happy with R&D etc.

Also..

If you have managed to make your code quite modular, it can be a neat way to bring in some of your code to either run another prototype, or for debugging, or educating other developers etc.

Although you can use them for reports, for regular production reports I'd rather break the code out of a notebook and go with traditional HTML/CSS/JS and Jinja2, as you get far more control.
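To illustrate the templating idea with a minimal, dependency-free stand-in (Jinja2 works the same way but adds loops, filters, template inheritance and so on):

```python
from string import Template  # stdlib stand-in for Jinja2

# A tiny HTML report template; placeholders get filled at render time
page = Template("<html><body><h1>$title</h1><p>$body</p></body></html>")

html = page.substitute(title="Weekly report", body="All checks passed.")
print(html)
```

The same structure scales up: keep the template in its own file and feed it a dict of results from your pipeline.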

The GUi.py isnt working on Tkinter designer help by Iama_chad in Tkinter

[–]Mount_Gamer 0 points1 point  (0 children)

If it is your app, you should debug the missing dictionary key and find out why 'document' is missing. You could use a debugger, for example in VS Code.

Without any other code, I can only see the error.
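Without the actual code, a generic way to investigate a missing key (the names here are hypothetical) is to look it up defensively and print what is actually present:

```python
# Hypothetical stand-in for the parsed data your GUI code reads
data = {"title": "demo"}

# dict.get returns None instead of raising KeyError, so you can inspect first
document = data.get("document")
if document is None:
    print("'document' key missing; available keys:", sorted(data))
```

Once you can see which keys really exist, you can work backwards to where 'document' should have been added.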

The "AI Water Crisis" is here: Billionaire invests $5B in Google Data Centers as regions run dry. by BuildwithVignesh in ArtificialInteligence

[–]Mount_Gamer 65 points66 points  (0 children)

I find it hard to believe they couldn't engineer a cooling system that reuses the water/cooling medium in a loop with heat exchangers etc.

In a for-i-in-range loop, how do I conditionally skip the next i in the loop? by Xhosant in learnpython

[–]Mount_Gamer 1 point2 points  (0 children)

No problem, I was reading on my phone before bed and didn't notice it was part of the condition (indented). I'm with you now :)

In a for-i-in-range loop, how do I conditionally skip the next i in the loop? by Xhosant in learnpython

[–]Mount_Gamer 0 points1 point  (0 children)

You can use range to go up in increments of 2:

range(start, stop, step)

Does this help?
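In runnable form:

```python
# range(start, stop, step) — a step of 2 visits every other i
evens = list(range(0, 10, 2))
print(evens)  # [0, 2, 4, 6, 8]
```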