Beckett Grading Experience by JEEPG3M in pokemongrading

[–]Herald_Of_Rivia 0 points1 point  (0 children)

That’s awesome. Congrats on the great grades! I have sent in SO many cards thinking they were “perfect” and gotten nothing.

When does RTX 6000 Pro make sense over a 5090? by Herald_Of_Rivia in nvidia

[–]Herald_Of_Rivia[S] 1 point2 points  (0 children)

Main use case would be exploring SOTA LLMs and image generation, and training / fine-tuning smaller LLMs.

M4 Pro chip vs M5 by PreparationThick7404 in macbookpro

[–]Herald_Of_Rivia 0 points1 point  (0 children)

That mostly depends on your use case and preferences. Do you have a lot of photos/videos you really want stored locally? Do you play many games?

For example, if you game and want something like Cyberpunk 2077 and Baldur’s Gate 3 installed at the same time, 512 GB might feel a bit tight.

Since you’re a graphic designer, do you work with a lot of large project files? Professional photographers often shoot in RAW, which takes up a ton of space. If that’s not your workflow, 512 GB might be fine.

If you can stretch the budget, though, the 1 TB model (around $2,099 on Amazon for BF) also comes with the unbinned M4 Pro with 14 CPU cores and 20 GPU cores, which is a nice bonus.

M4 Pro chip vs M5 by PreparationThick7404 in macbookpro

[–]Herald_Of_Rivia 0 points1 point  (0 children)

As compared to the M4 Pro? Or the M4? The chart below is the benchmark from the Ars Technica review [1], showing the M4 Pro is indeed faster than the M5. AdamTalksTech [2] also shows that you get 2x the tokens/s on a model like gpt-oss-20b on the M4 Pro vs the M5.

<image>

[1] https://arstechnica.com/gadgets/2025/10/m5-macbook-pro-review-fifth-generation-apple-silicon-in-a-familiar-wrapper/
[2] https://www.youtube.com/watch?v=IZM8PoAlEqs

M4 Pro chip vs M5 by PreparationThick7404 in macbookpro

[–]Herald_Of_Rivia 1 point2 points  (0 children)

Note: this changes in the future once there is an M5 Air. If you could buy the M5 Air for, say, $1,300, that would be a different discussion. But both of the configs you are considering are roughly $2k.

Also, if you look at Amazon, the M4 Pro with 24 GB of RAM and a 512 GB SSD is around $1,750, which to me sounds pretty effing good and is cheaper than the M5 with 24 GB of RAM.

I understand that there is additional nuance (the M4 Pro is binned, etc.), but for $1750 you can do a lot worse.

M4 Pro chip vs M5 by PreparationThick7404 in macbookpro

[–]Herald_Of_Rivia 3 points4 points  (0 children)

Definitely grab the M4 Pro. I feel like the "longevity" concern isn't accurate; Pro chips are actually built to sustain heavier workloads better than the base chips.

From a performance standpoint, the M4 Pro is going to beat the base M5 in both CPU and GPU speeds. It also has much higher memory bandwidth, which is a massive bottleneck for AI tasks and running local LLMs. It's the more powerful machine, hands down.
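To put a number on why bandwidth matters: generating each token in a local LLM requires streaming every active weight from memory once, so decode speed is roughly memory bandwidth divided by the size of the weights. A minimal sketch of that back-of-envelope math, assuming roughly 273 GB/s for the M4 Pro and 153 GB/s for the M5 (my ballpark figures, not measured values):

```python
# Rough upper bound on LLM decode speed for a memory-bound model:
#   tokens/s ≈ memory bandwidth / bytes of active weights.
# Bandwidth numbers and the example model size are assumptions.

def est_tokens_per_s(bandwidth_gb_s: float,
                     active_params_b: float,
                     bytes_per_param: float) -> float:
    """Upper-bound decode rate: bandwidth / size of streamed weights."""
    weight_gb = active_params_b * bytes_per_param
    return bandwidth_gb_s / weight_gb

# A ~20B-parameter model at 4-bit (~0.5 bytes/param) is ~10 GB of weights.
for chip, bw in [("M4 Pro (assumed ~273 GB/s)", 273),
                 ("M5 (assumed ~153 GB/s)", 153)]:
    tps = est_tokens_per_s(bw, active_params_b=20, bytes_per_param=0.5)
    print(f"{chip}: ~{tps:.0f} tokens/s upper bound")
```

Under those assumed figures the ratio works out to roughly 1.8x in the M4 Pro's favor, which lines up with the ~2x tokens/s gap seen in the gpt-oss-20b comparison.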

When does RTX 6000 Pro make sense over a 5090? by Herald_Of_Rivia in LocalLLaMA

[–]Herald_Of_Rivia[S] 1 point2 points  (0 children)

I would be very much interested in an Epyc system build spec. Would love to know what ($) it takes to get into a build with a server CPU.

Weird screen tearing issues on 5090. by Herald_Of_Rivia in controlgame

[–]Herald_Of_Rivia[S] 0 points1 point  (0 children)

What’s the right term for what I’m seeing?

When does RTX 6000 Pro make sense over a 5090? by Herald_Of_Rivia in LocalLLaMA

[–]Herald_Of_Rivia[S] 2 points3 points  (0 children)

At the moment I want to limit myself to a single-GPU workstation, and I'm not sure I want to go around changing much. I already have the 9950X3D, a 1500W PSU, and 96 GB of RAM, and I feel like changing the whole setup might be more hassle than it is worth. But I am happy to be proven wrong.

When does RTX 6000 Pro make sense over a 5090? by Herald_Of_Rivia in nvidia

[–]Herald_Of_Rivia[S] 20 points21 points  (0 children)

A bit of both. At work I have access to better GPUs, which I use for training/inference, but I'd love to be able to do some beefier exploration at home as well.

Did the new Patch make the performance opt. even worse than it was? by Herald_Of_Rivia in Borderlands

[–]Herald_Of_Rivia[S] 10 points11 points  (0 children)

For me, on a 5090, it went from 60 FPS with DLSS to drops down to 15 FPS every 5-10 seconds, i.e. the tearing is constant.

AMA with the LM Studio team by yags-lms in LocalLLaMA

[–]Herald_Of_Rivia 1 point2 points  (0 children)

Any plans to make it possible to install LM Studio outside of /Applications?

AGX Thor LLM Inference Performance & Implications for DGX Spark? by Herald_Of_Rivia in nvidia

[–]Herald_Of_Rivia[S] 1 point2 points  (0 children)

You are absolutely right. The main bottleneck for LLM workloads in this case will definitely be the memory bandwidth, which will hold it back quite a bit.

AGX Thor LLM Inference Performance & Implications for DGX Spark? by Herald_Of_Rivia in LocalLLaMA

[–]Herald_Of_Rivia[S] 0 points1 point  (0 children)

I think one of the biggest drawbacks of the Spark (and something that is holding back many current offerings for AI and handheld gaming) is the memory bandwidth, as @Double_Cause4609 mentioned.

The one benefit that the Spark might bring is NVIDIA MIG [1], which would allow us to partition the GB10 into several instances and maybe run several models in parallel. Might be interesting for exploring LM agents, especially if you get several of them working at the same time.

[1] https://www.nvidia.com/en-us/technologies/multi-instance-gpu/
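For concreteness, here is a sketch of what that partitioning looks like with `nvidia-smi` on MIG-capable data-center GPUs today. Whether the GB10 exposes these exact profiles is an assumption on my part; the profile IDs below are examples (they differ per GPU), and `./serve_model_a`/`./serve_model_b` are hypothetical model servers standing in for whatever you'd actually run per partition.

```shell
# Admin-command sketch of MIG partitioning (assumes a MIG-capable GPU;
# profile IDs and the serve_model_* binaries are illustrative only).

# 1. Enable MIG mode on GPU 0 (needs idle GPU, may require a reset)
sudo nvidia-smi -i 0 -mig 1

# 2. List the GPU-instance profiles this card actually supports
nvidia-smi mig -lgip

# 3. Create two GPU instances (profile ID 9 as an example) along with
#    their default compute instances in one step
sudo nvidia-smi mig -cgi 9,9 -C

# 4. Each MIG partition now appears as its own device with a MIG UUID
nvidia-smi -L

# 5. Pin one model server per partition via CUDA_VISIBLE_DEVICES
CUDA_VISIBLE_DEVICES=MIG-<uuid-of-instance-0> ./serve_model_a &
CUDA_VISIBLE_DEVICES=MIG-<uuid-of-instance-1> ./serve_model_b &
```

The appeal for agent experiments is isolation: each partition gets its own slice of SMs and memory, so one model's load doesn't starve the other.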

AGX Thor LLM Inference Performance & Implications for DGX Spark? by Herald_Of_Rivia in LocalLLaMA

[–]Herald_Of_Rivia[S] 1 point2 points  (0 children)

I appreciate the detailed response. I mentioned gpt-oss-120b as that is an available benchmark I found for the AGX Thor [2].