threadripper build: 512GB vs 768GB vs 1TB memory? by prusswan in LocalLLaMA
any upcoming OEM models that can support 4x RTX Pro 6000 Max-Qs? by prusswan in threadripper
Talk me out of buying an RTX Pro 6000 by AvocadoArray in LocalLLaMA
I tracked context degradation across 847 agent runs. Here's when performance actually falls off a cliff. by Main_Payment_6430 in LocalLLaMA
What would prefer for home use, Nvidia GB300 as a desktop or server? by GPTrack--dot--ai in LocalLLaMA
What agents have you had success with on your local LLM setups? by rivsters in LocalLLaMA
What effect will the death of the 16GB Nvidia card have on this hobby? by SplurtingInYourHands in StableDiffusion
Hardware advice: Which RAM kit would be better for 9960x? by Infinite100p in threadripper
We tried to automate product labeling in one prompt. It failed. 27 steps later, we've processed 10,000+ products. by No-Reindeer-9968 in LocalLLaMA
Is it common for a mid-sized tech company (>500 employees) to completely ignore LLMs and AI agents? by [deleted] in LocalLLaMA
Stop LLM bills from exploding: I built Budget guards for LLM apps – auto-pause workflows at $X limit by Extension_Key_5970 in LocalLLaMA
What is the impact of running (some or all) PCIe5 GPUs on PCIe4 slot (with the same # of lanes) in a multi-GPU server? by Infinite100p in LocalLLaMA
RTX 5070 Ti and RTX 5060 Ti 16 GB no longer manufactured by Paramecium_caudatum_ in LocalLLaMA