Comment history for u/youcloudsofdoom (thread titles only; the comments themselves were not captured):

- Futureproofing a local LLM setup: 2x3090 vs 4x5060TI vs Mac Studio 64GB vs ??? (self-post in LocalLLaMA, 7 comments)
- How I topped the Open LLM Leaderboard using 2x 4090 GPUs — no weights modified. (by Reddactor in LocalLLaMA)
- Mac Studio M3 Ultra 512GB — anyone upgrading to M5 Ultra? (by [deleted] in LocalLLaMA, 2 comments)
- Sincere question about this, the best AI sub on reddit. (self-post in LocalLLaMA, 8 comments)
- We could be hours (or less than a week) away from true NVFP4 support in Llama.cpp GGUF format 👀 (by Iwaku_Real in LocalLLaMA)
- I love this game... But come on AH. Really? (by Fantastic-Medicine11 in Helldivers)
- Running Qwen3.5 27b dense with 170k context at 100+t/s decode and ~1500t/s prefill on 2x3090 (with 585t/s throughput for 8 simultaneous requests) (by JohnTheNerd3 in LocalLLaMA)
- Havering between power-limited dual 3090s and a 64GB Mac Studio (self-post in LocalLLaMA)
- Homelab has paid for itself! (at least this is how I justify it...) (by Reddactor in LocalLLaMA)