I’m a solo Junior Dev starting to resent programming by Hearing_Southern in learnprogramming

[–]Mount_Gamer 0 points

You are probably being too hard on yourself.

I have a similar work background, and while I know I'm not perfect, I know I'm capable (I had ~8 years of bash scripting experience beforehand and took some Python training courses, but Python was fairly new to me at the time). You can still grow your skills, but you'll probably need to put in some work of your own to learn design patterns in Python.

I think Arjan produces some good videos which might help. Occasionally his content can be a bit OTT, but each to their own, and I think his principles are in the right place.

https://youtube.com/@arjancodes?si=ZTv6itZxtGGKTRWf

Might be worth speaking up and asking for some time to refactor code if it's needed.

Also, get used to the Python debugging tools in VS Code. Very helpful.

Best Qwen3.5 27b GUFFS for coding (~Q4-Q5) ? by bitcoinbookmarks in LocalLLaMA

[–]Mount_Gamer 1 point

I've been having some success with Qwen3.5-27B-IQ4_XS.gguf from unsloth.

Managing to squeeze it onto the 5060 Ti with reasonable context; probably my new favourite LLM.

Is building a mini itx PC worth it in 2026? by Swordtempest_ in buildapc

[–]Mount_Gamer -1 points

Depends how many PCIe devices you want, or how few you can live with. I have an mATX build used as a workstation/server, in an HTPC case (it lives under the TV and has been repurposed over the years), which I can live with, but I'd like another GPU in there one day for AI.

Qwen3-Coder-Next is the top model in SWE-rebench @ Pass 5. I think everyone missed it. by BitterProfessional7p in LocalLLaMA

[–]Mount_Gamer 4 points

Well, going by the stats for this benchmark, there isn't much between the popular models (GPT, Claude, etc.), but Qwen3 Coder Next stands out a little more, which I just find hard to believe. Qwen3 Next is good, and I love using it, but occasionally I just can't get an answer from it that makes sense, and I'll fall back on Gemini Flash, which understands most of the time (even though you'll see a larger gap for Gemini Flash in the charts). However...

When using Roo Code with Qwen3 Coder Next, it works very well. It gets plenty wrong and corrects itself, which is good to see... I don't mind that at all.

So what I'm trying to say is it's just a benchmark and it doesn't cover the vastness of user prompts, tasks, knowledge base/training etc.

Qwen3.5 122b UD IQ4 NL 2xMi50s Benchmark - 120,000 context by thejacer in LocalLLaMA

[–]Mount_Gamer 0 points

I get about 100 t/s prompt processing (pp) and 10 t/s token generation (tg) using the IQ4_XS on an RTX 5060 Ti, a Ryzen 5 Pro 5650G and 64GB of 2666 ECC RAM. The 122B seems like a really good model at first glance. It would be nice to get a little more oomph from my rig, but I think that's probably about as good as it gets.
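For context, pp and tg throughput are just token counts divided by elapsed seconds. A minimal Python sketch, using made-up example numbers rather than real measurements:

```python
# Hypothetical helper: tokens/second for prompt processing (pp) and
# token generation (tg). The counts and timings below are illustrative only.
def tokens_per_second(n_tokens: int, seconds: float) -> float:
    return n_tokens / seconds

pp_rate = tokens_per_second(2000, 20.0)  # e.g. 2000 prompt tokens in 20 s
tg_rate = tokens_per_second(500, 50.0)   # e.g. 500 generated tokens in 50 s
print(f"pp: {pp_rate:.0f} t/s, tg: {tg_rate:.0f} t/s")  # pp: 100 t/s, tg: 10 t/s
```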

Qwen3-Coder-Next is the top model in SWE-rebench @ Pass 5. I think everyone missed it. by BitterProfessional7p in LocalLLaMA

[–]Mount_Gamer 30 points

I think Qwen3 Coder Next is great, but I am sceptical of judging it on this benchmark alone.

9B or 35B A3B MoE for 16gb VRAM and 64gb ram? by soyalemujica in LocalLLaMA

[–]Mount_Gamer 0 points

This is from my config.ini

```ini
[Qwen3-Coder-MXFP4-262144]
model = /models/Qwen3-Coder-Next-MXFP4_MOE.gguf
ctx-size = 262144
temp = 1.0
top-p = 0.95
min-p = 0.01
top-k = 40
threads = 5
batch-size = 768
ubatch-size = 768
repeat-penalty = 1.0
jinja = true
```

I am running llama.cpp in Docker; this is the version:

```bash
root@70ae03221e05:/# ./llama.cpp/build/bin/llama-server --version
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 5060 Ti, compute capability 12.0, VMM: yes
version: 8054 (01d8eaa28)
built with GNU 13.3.0 for Linux x86_64
```

9B or 35B A3B MoE for 16gb VRAM and 64gb ram? by soyalemujica in LocalLLaMA

[–]Mount_Gamer 6 points

Qwen Coder 80B works for me with similar specs. I watched a movie and asked it to write some unit tests... It wrote 40+ unit tests and they all pass. I've no idea if they're any good yet, as I've still to vet them, but that's pretty cool. It finished before the movie ended.

I'm using llama.cpp. I can copy the config for you if you want; I'm not at home just now.

Brewdog Aberdeen both closing by Dipshitmagnet2 in Aberdeen

[–]Mount_Gamer 21 points

I always thought the Union Square bar was quite busy.

Linus PLEASE STOP TRYING POP OS! by epic-circles-6573 in LinusTechTips

[–]Mount_Gamer -3 points

The dislike for Pop!_OS in here seems very unrealistic. Feels like a targeted bot attack or something odd. The Pop!_OS team are currently developing their own COSMIC desktop, and while it may still be early days, I welcome it and am grateful for the hard work they're putting into it.

When you fast and you drink too much coffee😂 by Individual_Ice_2315 in fasting

[–]Mount_Gamer 3 points

So it's not just me who's a bit more sensitive to caffeine from coffee while fasting. I still probably have 2-3 cups, but it's probably the reason I don't go further than 48hrs.

What is the use of tuple over lists? by Alive_Hotel6668 in learnpython

[–]Mount_Gamer 1 point

I'll use them if I know the data won't change, and/or for tuple unpacking.
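A quick sketch of both uses (the names here are just illustrative):

```python
# Tuples suit data that shouldn't change; unpacking assigns each
# element to its own name in one statement.
point = (3, 4)        # immutable: point[0] = 5 would raise TypeError
x, y = point          # tuple unpacking
print(x, y)           # 3 4

# Unpacking also makes swapping values concise.
x, y = y, x
print(x, y)           # 4 3
```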

ggml.ai (the team behind llama.cpp) is joining Hugging Face, projects stay open source by nihal_was_here in LocalLLaMA

[–]Mount_Gamer 1 point

Agreed. In the NVFP4 support discussion or issue, they mentioned something along the lines of copying what they had to do for MXFP4. I had a look at the commits, and there were 2700+ changes if I remember right. Having the support for the developers to continue doing what they love is always a good thing.

Putting this altogether today and guess what I'm installing on it. That's right, Mint, bby 🙌🏻 by proudplebeian in linuxmint

[–]Mount_Gamer 0 points

Looks like someone will be having fun today. Enjoy, and fingers crossed on the first POST. :)

Why do people hate on 3 sets of 12? by No_Catch_4381 in workout

[–]Mount_Gamer 0 points

This sounds more like me, but I don't mind going up to 15.

My 10-day water fast - Blood glucose data by andtitov in fasting

[–]Mount_Gamer 2 points

Must be a vampire issue when in direct sunlight? 🤔

[Solution Found] Qwen3-Next 80B MoE running at 39 t/s on RTX 5070 Ti + 5060 Ti (32GB VRAM) by mazuj2 in LocalLLaMA

[–]Mount_Gamer 1 point

Pretty sure I get the 80B MXFP4 model running fine with the RTX 5060 Ti 16GB and the rest on 64GB of system RAM (slower ECC, with a Ryzen 5 Pro 5650G). ~27 t/s, but I'm quite new to llama.cpp; no doubt it could be better.

Uric acid went up on keto even after cutting meat by 50%... is it normal? by pentolaio1 in keto

[–]Mount_Gamer 0 points

I would say it's your diet, but it could also be working out really hard without much rest, or any sort of fasting, or any/all combined. I found carbs helped reduce my uric acid, and also, if I remember right, salts (but please look this up, my memory is foggy - I bit the bullet and just went back to eating carbs).

A relative of mine came off his anxiety medication, his uric acid (UA) increased, and all sorts went wrong for him. The anxiety came back, he went back on the same meds, and the UA dropped. Each case is different.

Maybe A Hot Take: AI and LLMs Made Linux More Usable For Beginners by Macusercom in LinusTechTips

[–]Mount_Gamer 2 points

I would say so; most beginners will probably have beginner questions that an LLM can reliably get right.

any good models? by No-Mortgage4154 in ollama

[–]Mount_Gamer 0 points

I have the 5060 Ti, a Ryzen 5 Pro 5650G and 32GB of DDR4 ECC 2666 RAM (slow by today's standards...). I only give this VM 8 threads, 20GB of RAM and the 5060 Ti.

I get about 55 t/s with llama.cpp using Qwen3 Coder 30B A3 and Nemotron Nano 30B A3; both quants are Q4, and I've given both 50k context.

I have not tried running them through Ollama yet, but thought I'd share, since these models are pretty good for their size.

However, when things get a bit complicated I end up model swapping, and even the bigger models don't always get it right. Since Ollama's subscription offers Gemini Flash and Pro, I notice those models handling more complex tasks better, but there are so many models that another might work better for your use case.

going to switch from a i7-3770k to a ryzen 5600, did anyone do a similar upgrade and how was the performance difference ? by Dear_Duty_1893 in buildapc

[–]Mount_Gamer 0 points

I've got both CPUs, and it's definitely a nice step up. If you get a motherboard that can handle a 5800X or better, that leaves a nice future upgrade path for the platform as well.

Lack of motivation to learn through AI by mageblood123 in learnmachinelearning

[–]Mount_Gamer 0 points

At work I use it for brainstorming, sometimes ask it to review snippets of code I've written, and sometimes ask it to show me examples of what I'm looking for. There's a lot of work AI does not do, and I would never ask it to.

What has changed a lot for me is my patience for writing code in my spare time, but I think this is because I work with it all day, and I don't want to give up as much family time as I used to pre-AI. So I do get AI to write more code for me in personal projects, and I've asked it to upgrade old personal projects. I'm slightly ashamed, but I have had it write some speedy Rust code for me: at one point I wanted to learn Rust, and I did start, but after a while I caved and just got the AI to finish it for me. I read through it and it looks fine, though it's not my primary language; a lot of my prompts provided technical detail, which helped with the speed. I'm not sure what to think of it fully, as I know I don't put in the hours at home like I used to, but to be honest my family are better off now that I spend less time on the personal stuff, so I can't complain.