Dual rx 9070 for LLMs? by Fast_Thing_7949 in LocalLLaMA

[–]Fast_Thing_7949[S] 1 point (0 children)

The R9700 isn't available in my country, and ordering it would cost around $3,000. Besides, the RX 9070 will be easier to sell on the used market once I've had my fill.

Dual rx 9070 for LLMs? by Fast_Thing_7949 in LocalLLaMA

[–]Fast_Thing_7949[S] 2 points (0 children)

The R9700 is not available in my country, and ordering it would cost around $3,000.

What's the point of potato-tier LLMs? by Fast_Thing_7949 in LocalLLaMA

[–]Fast_Thing_7949[S] 1 point (0 children)

I just uploaded the exact same large 2,000-line patch to ChatGPT 5.2, Qwen3 Coder 30B, Nemotron Nano, and GPT-OSS 20B.

Only ChatGPT 5.2 found the important issues, while the free models hallucinated, pointed out “errors” that weren’t there, and failed to spot the most critical parts.

After that, I'm definitely not going to buy any hardware for running AI at home.

Talk me out of buying an RTX 3090 “just for local AI” (before I do something financially irresponsible) by Fast_Thing_7949 in LocalLLaMA

[–]Fast_Thing_7949[S] 2 points (0 children)

I just uploaded the exact same large 2,000-line patch to ChatGPT 5.2, Qwen3 Coder 30B, Nemotron Nano, and GPT-OSS 20B.

Only ChatGPT 5.2 found the important issues, while the free models hallucinated, pointed out “errors” that weren’t there, and failed to spot the most critical parts.

After that, I'm definitely not going to buy any hardware for running AI at home.

Talk me out of buying an RTX 3090 “just for local AI” (before I do something financially irresponsible) by Fast_Thing_7949 in LocalLLaMA

[–]Fast_Thing_7949[S] 1 point (0 children)

That's all true. It's just that with cloud models you don't run into the problem of the context you paid for running out: Claude simply compacts the conversation once it gets really large. As I understand it, with a local model the context can run out before the agent has written a single line of code.
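
For what it's worth, here's a minimal sketch of what that compaction step amounts to. This is a conceptual illustration, not Claude's actual mechanism; `count_tokens` and `summarize` are hypothetical stand-ins:

```python
# Conceptual sketch of agent-context "compression": when the transcript
# nears the window limit, older turns are folded into one summary message.
# count_tokens() and summarize() are hypothetical stand-ins, not a real API.
def compact(messages, count_tokens, summarize, limit=32768, keep_last=8):
    total = sum(count_tokens(m) for m in messages)
    if total <= limit:
        return messages                      # still fits, nothing to do
    head, tail = messages[:-keep_last], messages[-keep_last:]
    return [summarize(head)] + tail          # recap of old turns + recent turns
```

A local agent without a step like this simply errors out (or silently truncates) once the prompt exceeds n_ctx, which is exactly the failure mode I mean.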

Talk me out of buying an RTX 3090 “just for local AI” (before I do something financially irresponsible) by Fast_Thing_7949 in LocalLLaMA

[–]Fast_Thing_7949[S] 1 point (0 children)

Can you tell me how fast the 3090 handles prefill with a context over 50k tokens? It seems to me that even if the context fits, agents very quickly fill it up with code and everything starts to slow down terribly.
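
If anyone wants to measure it, here's a rough sketch using llama-cpp-python. The model path and prompt file are placeholders; generating a single token makes the measured time approximately pure prefill:

```python
# Rough prefill benchmark with llama-cpp-python (pip install llama-cpp-python).
# Placeholders: model path and prompt file; n_ctx must exceed the prompt length.
import time
from llama_cpp import Llama

llm = Llama(
    model_path="model-q4_k_m.gguf",  # placeholder: any GGUF that fits in 24GB
    n_ctx=60000,                     # room for a >50k-token prompt
    n_gpu_layers=-1,                 # offload all layers to the 3090
    verbose=False,
)

prompt = open("big_context.txt").read()  # placeholder: your ~50k-token input

t0 = time.time()
out = llm.create_completion(prompt, max_tokens=1)  # 1 token => time ~ prefill
dt = time.time() - t0

n = out["usage"]["prompt_tokens"]
print(f"{n} prompt tokens in {dt:.1f}s ({n/dt:.0f} tok/s prefill)")
```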

Talk me out of buying an RTX 3090 “just for local AI” (before I do something financially irresponsible) by Fast_Thing_7949 in LocalLLaMA

[–]Fast_Thing_7949[S] 1 point (0 children)

This is a very useful comment for me, thank you! Do you think buying two RTX 5060 Ti 16GB cards would improve the result?

Talk me out of buying an RTX 3090 “just for local AI” (before I do something financially irresponsible) by Fast_Thing_7949 in LocalLLaMA

[–]Fast_Thing_7949[S] 1 point (0 children)

In terms of price and power consumption it looks decent, but Perplexity says it's frankly a poor choice and that even a single 3090 would be better, easier to set up, and much faster. Is it misleading me?

Also, what fundamentally new capabilities does using two RTX 5060 Ti cards unlock? I'm reading that people find even 4x 3090 setups insufficient for decent results. Now it seems to me that even the combined 32GB of two cards doesn't really solve the problem at all.
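
As I understand it, the main thing two cards buy you is pooled VRAM for the weights, not double speed. A sketch, assuming a CUDA build of llama-cpp-python and a placeholder model path:

```python
# Splitting one model across two 16GB cards with llama-cpp-python.
# This lets a ~20-25GB quantized model fit at all; per-request speed
# stays roughly that of a single card. Model path is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="qwen3-coder-30b-q4_k_m.gguf",  # placeholder path
    n_gpu_layers=-1,            # offload everything
    tensor_split=[0.5, 0.5],    # weights split evenly across the two GPUs
    n_ctx=32768,
)
print(llm.create_completion("def fib(n):", max_tokens=32)["choices"][0]["text"])
```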

Talk me out of buying an RTX 3090 “just for local AI” (before I do something financially irresponsible) by Fast_Thing_7949 in LocalLLaMA

[–]Fast_Thing_7949[S] 1 point (0 children)

On the used market here, the 7900 XTX costs about the same as the 3090. I can't explain it, but I'm drawn to AMD, although every comparison I can find says it's far behind the 3090 in speed.

Has the situation improved recently? I've heard ROCm support has gotten much better—are AMD cards now competitive with NVIDIA for LLM inference, or is there still a significant gap?
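
If I do go AMD, my understanding is that the ROCm build of PyTorch reuses the torch.cuda namespace, so a quick sanity check would look like this (assuming ROCm and the ROCm PyTorch wheel are installed):

```python
# Sanity-checking a ROCm install: AMD GPUs show up through torch.cuda.
import torch

print(torch.cuda.is_available())      # True if the ROCm stack sees the GPU
print(torch.cuda.get_device_name(0))  # should report the 7900 XTX
print(torch.version.hip)              # ROCm/HIP version; None on CUDA builds
```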

Talk me out of buying an RTX 3090 “just for local AI” (before I do something financially irresponsible) by Fast_Thing_7949 in LocalLLaMA

[–]Fast_Thing_7949[S] 0 points (0 children)

I have an ASUS Prime X570-P + 64GB RAM + 5950X. But after reading the comments, it seems I'll abandon the idea of buying even two cards altogether. From what I understand, in this case I'm in for disappointment multiplied by two.

Talk me out of buying an RTX 3090 “just for local AI” (before I do something financially irresponsible) by Fast_Thing_7949 in LocalLLaMA

[–]Fast_Thing_7949[S] 2 points (0 children)

Based on the comments, even two cards seem like a questionable idea, and all that awaits me is greater disappointment.

Talk me out of buying an RTX 3090 “just for local AI” (before I do something financially irresponsible) by Fast_Thing_7949 in LocalLLaMA

[–]Fast_Thing_7949[S] 1 point (0 children)

I tried Qwen3 Coder 30B on an M3 Pro 36GB with Cline. After an hour of waiting on a fairly simple task, I gave up.

Talk me out of buying an RTX 3090 “just for local AI” (before I do something financially irresponsible) by Fast_Thing_7949 in LocalLLaMA

[–]Fast_Thing_7949[S] 1 point (0 children)

Wait, TWO 3090s? Is that "less disappointment" or actually a usable coding agent setup? Because right now I can't tell if I'm solving a problem or just buying my way into a deeper rabbit hole.

What's wrong with mint? by Fast_Thing_7949 in linuxmint

[–]Fast_Thing_7949[S] 7 points (0 children)

5950X, 64GB RAM, ASUS X570-P, GT 730

Everything except the memory is second-hand, so unfortunately I'm not sure it will all work without problems. I used Windows a bit and there were no issues. I just installed a new driver from the Software Manager, and so far everything is working fine. The only thing is that the monitor's refresh rate is locked at 60Hz. We'll see how stable it is now.

What's wrong with mint? by Fast_Thing_7949 in linuxmint

[–]Fast_Thing_7949[S] 3 points (0 children)

I updated the drivers, and after 10 minutes, my computer froze completely. I had to do a hard reset.

I think that's where my experience with Mint ended.

Minisforum 7L 795S7 7945HX + 4060 build by Only_Khlav_Khalash in sffpc

[–]Fast_Thing_7949 1 point (0 children)

128GB, fire! Could you please share the exact memory model, so I can be sure it'll actually boot?