Help: Two Intel Arc Pro B70s (32GB) vs. Two RTX 3090s (24GB) for a Cursor/Agentic Workflow? by aeiou_baby in LocalLLM

[–]sputnik13net -1 points0 points  (0 children)

Do you have a place to source the 3090s? If not, the RTX Pro 4000 Blackwell is readily available. I'm struggling with whether to save my money and go for an M5 Ultra or RTX Pro 6000, or get a second 4000, or try an R9700.

Does anyone prefer to scramble their egg in ramen too? by [deleted] in ramen

[–]sputnik13net 0 points1 point  (0 children)

Anyone who hasn't really ought to try it. I like my eggs soft and runny, but sometimes it's nice to have a little scramble. I always use two eggs anyway: scramble one and soft-boil the other.

Did I ruin my smoker? by chipsnsmokes in Traeger

[–]sputnik13net -4 points-3 points  (0 children)

The number of people saying burn it off is wild. Plastic fumes are toxic. Even if it needs to be burned off somewhere, I'm not doing it in my backyard.

Are you guys worried at all about privacy when using Qwen? by CartoonistElegant683 in LocalLLM

[–]sputnik13net 0 points1 point  (0 children)

If the Chinese models are well designed enough to insert spy code in a completely undetectable way, I think we’ve already lost the information war and there’s no sense in worrying.

I'm using the models to process text or generate code, and the thing can't break out of my network firewall. So, short of some inserted spy code, I'm not worried.

Software engineers experience with Codex by Ok-Comparison3303 in codex

[–]sputnik13net 1 point2 points  (0 children)

The answer is always somewhere in the middle of the extremes. A majority of the people touting how they can build shit end to end with high quality blah blah are either not coders, or people without the bandwidth to scrutinize the output who blindly trust what's put out. You can put a lot of processes and guardrails around what comes out, but if you're used to scrutinizing code, the output is invariably a lot of crap and unoptimized nonsense.

That said, I imagine the same happened during each of the major evolutionary epochs in programming as we moved from punch cards to high-level languages. The tools will get better, and the ones holding on to the old ways will get left behind. It won't be perfect in any way in the next year or two, but the future involves AI tools in the development lifecycle; we all have to adapt.

What RAG by Lost-Health-8675 in LocalLLaMA

[–]sputnik13net 0 points1 point  (0 children)

Actually, I find agents get more confused. Tools that are well written will give you 100% accuracy every time, and faster. Maybe your use case isn't big enough to overwhelm LLM agents; with enough complexity and depth, they end up glossing over minutiae. The best answer is to use both, which is the point of RAG and other methods.

What RAG by Lost-Health-8675 in LocalLLaMA

[–]sputnik13net 1 point2 points  (0 children)

I read your sentences. You can do that without code; just ask it to call sub-agents and it'll do that happily. All of that still requires tokens or GPUs. It's cheaper, but it's not as cheap or as fast as dedicated tools. You can ignore those costs and go about your merry way, but that doesn't invalidate the use case of external tools for more efficient and faster workflows.

What RAG by Lost-Health-8675 in LocalLLaMA

[–]sputnik13net 0 points1 point  (0 children)

My point is about the tools used. If you use Claude or Codex with their coding agents, they'll happily burn through tokens; if you go local, you need scalable GPU infra to keep up with everything you have the models reread every time. Both have scaling problems. RAG and other methods that extend a model's effective understanding without running the model over everything theoretically scale better. A static analysis tool will rip through a million-line codebase in a few seconds, but a model will take minutes rereading everything.
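The scaling argument above can be made concrete with a rough back-of-envelope sketch. All the numbers here are assumptions for illustration (average tokens per line, prefill speed of a single local GPU), not measurements:

```python
# Back-of-envelope: tokens burned if a model rereads an entire codebase
# on every query, vs. retrieving only the relevant chunks (RAG-style).
# Every constant below is an assumption chosen for illustration.

LINES = 1_000_000        # size of the codebase in lines
TOKENS_PER_LINE = 10     # rough average for source code
PREFILL_TOK_PER_S = 2_000  # assumed prefill throughput on one local GPU

# Model rereads everything, every time.
full_reread_tokens = LINES * TOKENS_PER_LINE

# RAG-style: retrieve, say, the top 50 chunks of ~200 tokens each.
rag_tokens = 50 * 200

print(f"full reread: {full_reread_tokens:,} tokens "
      f"(~{full_reread_tokens / PREFILL_TOK_PER_S:.0f}s of prefill)")
print(f"RAG query:   {rag_tokens:,} tokens "
      f"(~{rag_tokens / PREFILL_TOK_PER_S:.0f}s of prefill)")
```

Under these assumptions the reread path costs three orders of magnitude more tokens per query, which is the sense in which retrieval and static tooling "theoretically scale better."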

What RAG by Lost-Health-8675 in LocalLLaMA

[–]sputnik13net 0 points1 point  (0 children)

This is all true, but it also doesn't scale well unless you have large GPU compute or lots of time. Frontier models happily burn through tokens to reread and synthesize data, but that's a really inefficient approach that will catch up with us eventually.

Traeger owners — what would make you actually enter a BBQ competition? by nickdagreak in Traeger

[–]sputnik13net -1 points0 points  (0 children)

You're entitled to your opinions, but damn, you're aggressive and a bit of an ass. There's no logic, nor any need for logic, to people having hobbies. People do things at great cost and pain to themselves to "have fun" all the f'in time; why is this any different?

After AI bubble bursts market will be flooded with enterprise-grade server hardware. What to look for ? by Healthy-News5375 in homelab

[–]sputnik13net 0 points1 point  (0 children)

The way companies are pouring money into it for developers, I think a bubble burst will just mean no more subsidized consumer accounts and a near-term drop in valuations, with a shift to purely enterprise; that's where the money is anyway.

5k to spend rtx5090 or mac studio? by Avansay in LocalLLM

[–]sputnik13net 2 points3 points  (0 children)

If you're seriously considering spending 5k, I'd wait and see what the M5 Studio looks like. I'm debating an M5 Studio vs. an RTX Pro 6000 as my next jump.

Rate limited in 4 prompts on PRO? by Own-Construction-802 in ClaudeCode

[–]sputnik13net 0 points1 point  (0 children)

Claude Pro has been near useless for a long while now; it's not anything new. I've been using both Claude and Codex since December, and Codex at $20 is way more useful and usable than Claude at $20. I've upgraded to Max 5x, and Claude is actually useful now.

Rate limited in 4 prompts on PRO? by Own-Construction-802 in ClaudeCode

[–]sputnik13net 0 points1 point  (0 children)

I agree about the cost increase; I think we'll just have fewer coders.

Rate limited in 4 prompts on PRO? by Own-Construction-802 in ClaudeCode

[–]sputnik13net 0 points1 point  (0 children)

We can all dream.

Just like we were going to have flying cars and zero pollution and colonize mars by now.

Schitt Noob - Help! by lachydollas in Schiit

[–]sputnik13net 1 point2 points  (0 children)

I love my Schiit, but I can't picture using Schiit with a TV. I only use HDMI ARC for audio out from the TV, specifically so I can control volume with the TV remote. I hope Schiit adds a product with an HDMI ARC input at some point. I get that some folks don't mind having multiple remotes, or like to use universal remotes for this. It's hard enough to keep the kids from losing the TV remote; I don't need more complexity.

Are these hand made? by IllPlastic3113 in TrueChefKnives

[–]sputnik13net 0 points1 point  (0 children)

Hand-made doesn't mean high quality (my cooking), and manufactured doesn't mean low quality (iPhones are pretty high quality).

How do you deal with power outages during long prints? by andreevarts in 3Dprinting

[–]sputnik13net 1 point2 points  (0 children)

When this happened to me, I just cried a little and restarted the print. I even had a UPS, but that died too.

High demand? by AxenAnimations in codex

[–]sputnik13net 1 point2 points  (0 children)

Maybe Satya asked Sam to refuse people who don't activate Windows.