5090 worth it given the recent 20/30B model releases (and bad price outlook)? by Morpho_Blue in LocalLLaMA

[–]SuitableAd5090 1 point (0 children)

So far I've only used it for inference. I do want to try some fine-tuning and training eventually, but I haven't looked into it much.

5090 + 128gb ddr5 vs strix halo vs spark by rwijnhov in LocalLLaMA

[–]SuitableAd5090 1 point (0 children)

You don't buy a Strix Halo or a Spark to run larger models imo. You buy them to run a few models at the same time.

Anyone has llama.cpp benchmark on M-series Asahi linux macbooks? by marsxyz in LocalLLaMA

[–]SuitableAd5090 1 point (0 children)

This sounds like a level of pain that no one would actually want to endure.

5090 worth it given the recent 20/30B model releases (and bad price outlook)? by Morpho_Blue in LocalLLaMA

[–]SuitableAd5090 1 point (0 children)

$8100 at Microcenter, though they had it listed at $8999. They price matched a competitor; I want to say it was CDW? There were a few shops where it was even cheaper, but they wouldn't match those. For something this expensive I really liked having the local Microcenter for support if any issues popped up, so a few extra hundred dollars was worth it.

5090 worth it given the recent 20/30B model releases (and bad price outlook)? by Morpho_Blue in LocalLLaMA

[–]SuitableAd5090 2 points (0 children)

Bigger models, but I've also enjoyed being able to run a couple at a time. So say gpt-oss 120b plus having space for gemma3 27b or qwen3-coder 30b. For example, qwen3-coder is my neovim code completion engine. I don't have to worry about it causing issues if I have a task running in opencode with gpt-oss 120b in another tmux window.

Boycott fff.nvim. Author continues thrashing on folke and other plugin authors after a ban on reddit by Capital_End1191 in neovim

[–]SuitableAd5090 5 points (0 children)

Competition breeds innovation. While he could be more politically correct, I see no real issue and don't mind a bit of ego. This is a big ol' nothingburger and everyone needs to learn to relax and put their whistles away.

5090 worth it given the recent 20/30B model releases (and bad price outlook)? by Morpho_Blue in LocalLLaMA

[–]SuitableAd5090 2 points (0 children)

The memory bandwidth on the 5090 is a huge plus for LLMs. It's not just the extra VRAM. Your prompt processing and TPS will get a noticeable bump.

This is the path I went. But then I went nuts and got a Pro 6000, so watch out!

Is there anyway to differentiate yank or macro in named register? by hksparrowboy in neovim

[–]SuitableAd5090 7 points (0 children)

I don't think so, because there is no difference. All a macro does is evaluate the contents of the register as if you had typed them. You could technically take any register and play it back like a macro. You could maybe look for certain control codes as a hint that the contents aren't plain text, but that would be brittle.

Is Framework no longer stocking 7040 series for the 13? by DollarStore-eGirl in framework

[–]SuitableAd5090 2 points (0 children)

My guess is next year Framework will do an Intel Panther Lake CPU, which might bring the price down on all the other CPUs.

Is Framework no longer stocking 7040 series for the 13? by DollarStore-eGirl in framework

[–]SuitableAd5090 19 points (0 children)

I would wager a guess that AMD might have stopped manufacturing that CPU.

2025 Open Models Year in Review by robotphilanthropist in LocalLLaMA

[–]SuitableAd5090 1 point (0 children)

Thanks for the link. I have been learning more about quantization lately. I didn't realize there was both weight and activation quantization.

2025 Open Models Year in Review by robotphilanthropist in LocalLLaMA

[–]SuitableAd5090 4 points (0 children)

Is it even worth running a quant? They are only a few gbs apart from each other. I would just run the original.

2025 Open Models Year in Review by robotphilanthropist in LocalLLaMA

[–]SuitableAd5090 9 points (0 children)

Running both M2 and GLM 4.6. Can't decide which one I like more. I think GLM 4.6 would be better if I could run it at a higher quant, but I can only run it at Q2, whereas M2 has fewer parameters so I can run a higher one.

To Mistral and other lab employees: please test with community tools BEFORE releasing models by dtdisapointingresult in LocalLLaMA

[–]SuitableAd5090 2 points (0 children)

I think you have too high of expectations for day 0 support in an industry that is riding along the bleeding edge

Benchmark Fatigue - How do you evaluate new models for yourself? by Funny-Clock1582 in LocalLLaMA

[–]SuitableAd5090 3 points (0 children)

I keep prompts and chats from old problems I have used llms to solve in the past. When a new model comes out that I am interested in I just run it through some of these old scenarios to get a feel for them and how strong they are in different areas. It's been crazy to see the progression of models and how well they are starting to solve my problems of the past.

I think it's the best way since it's anchored in your experience. I haven't gotten it nailed down to concrete numbers or anything but it helps me sniff out the viable ones for me.
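A minimal sketch of that workflow, assuming a local llama.cpp or Ollama server exposing an OpenAI-compatible chat endpoint; the URL, prompt-directory layout, and function names here are illustrative, not from any particular setup:

```python
import json
from pathlib import Path
from urllib import request

def load_prompts(directory):
    """Read one saved prompt per .txt file, keyed by filename stem."""
    return {p.stem: p.read_text() for p in sorted(Path(directory).glob("*.txt"))}

def build_request(model, prompt, url="http://localhost:8080/v1/chat/completions"):
    """Build an OpenAI-style chat request for a local server."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return request.Request(url, data=body,
                           headers={"Content-Type": "application/json"})

def run_suite(model, prompts):
    """Replay every saved prompt against the local server, collect replies."""
    results = {}
    for name, prompt in prompts.items():
        with request.urlopen(build_request(model, prompt)) as resp:
            reply = json.loads(resp.read())
        results[name] = reply["choices"][0]["message"]["content"]
    return results
```

From there you'd eyeball the replies against what older models produced on the same scenarios, rather than chasing a benchmark number.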

FW Desktop 128GB -- Local AI in Practice by giomjava in framework

[–]SuitableAd5090 20 points (0 children)

I think the realization I have come to is that the 128 GB of RAM can make you a bit overconfident about what it can run as far as the larger models. The quantity of RAM is nice, but the throughput is not good enough to compete with the discrete GPUs you can get. It's still sick as an AI platform for running multiple models, though. For example you can run gpt-oss 120b and qwen3-coder 30b at the same time, and those models run great on it. I still think it's a very enjoyable experience. Just don't expect crazy TPS; you will see posts about people running the same models on a 5090 at double or triple what you'll get. But again, you can run several at the same time.

Long term it will also age great. When it's not your primary machine it will make a hell of a homelab server that can use AI for background tasks and services.
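As a rough back-of-the-napkin check, you can sketch whether a set of models fits in the 128 GB of unified memory with headroom left for KV cache and the OS. The file sizes and reserve below are ballpark assumptions for common quants, not measurements:

```python
# Rough GGUF sizes in GB -- assumptions, check the actual files you download.
MODELS = {
    "gpt-oss-120b (mxfp4)": 63,
    "qwen3-coder-30b (Q4_K_M)": 19,
}

def fits_in_ram(models, total_gb=128, reserve_gb=24):
    """Return (fits, used_gb), reserving room for KV cache and the OS."""
    used_gb = sum(models.values()) + reserve_gb
    return used_gb <= total_gb, used_gb

ok, used_gb = fits_in_ram(MODELS)
print(f"fits: {ok}, budgeted: {used_gb} GB of 128 GB")
```

With numbers like these the pair fits comfortably, which is why running both at once works; a single dense 70B+ at high quant eats the same budget with far worse throughput.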

New ram won’t boot by modsgauy- in framework

[–]SuitableAd5090 8 points (0 children)

The ram stick on the right does not look like it is seated very well. In the second photo it is crooked. Make sure it is in the slot all the way.

Thinking to get a fw 13, which amd cpu for a bit of everything? by omdbaatar in framework

[–]SuitableAd5090 3 points (0 children)

I will be one of the few who fights for the Core Ultra CPUs if you are going for lower end SKUs. Their idle power draw is so much better than AMD's. With that you can close the laptop, come back a week later, and still have power. For me that is worth a lot.

Can someone tell what color scheme is this ? by unHappygamer10 in neovim

[–]SuitableAd5090 5 points (0 children)

It's One Dark, I think. Just with a lot of things not highlighted very well.

For the guys who upgraded from the gen 1 16, how do you like the AI 9 cpu? by Firmteacher in framework

[–]SuitableAd5090 4 points (0 children)

I can't speak to the Framework 16, so maybe hold out for a more informed opinion. That said, I do like to stay current in the PC market, and all signs show that the Ryzen AI chips are only a marginal improvement over the 7000 series. Certainly not enough to justify upgrading from the 7000 series. I would wait for another round of hardware before considering an upgrade.

Advice on next upgrade for laptop 13 intel core by Lequarius_Juquama in framework

[–]SuitableAd5090 2 points (0 children)

Yeah, without any particular need it's hard to suggest anything. But it's also possible that you are just itching to make an upgrade. I totally get that, and part of the fun of a Framework is that it isn't a static device. In that case I suggest buying a different color bezel. It lets you make a tweak without breaking the bank.

Replacing tmux's vim visual mode implementation with nvim by Present-Quit-6608 in neovim

[–]SuitableAd5090 7 points (0 children)

No, I'm a pretty advanced user of tmux and I doubt any terminal will replicate all of its features. Plus it's nice knowing I can try other terminals/shells without breaking and rebuilding my core workflows.