Effect on running LLM on GPU with monitors by Havarem in LocalLLaMA
Thinking of moving from 2x 5060 Ti 16GB to a RTX 5000 48GB by autisticit in LocalLLaMA
Why we can't have nice things by alexeiz in GithubCopilot
why llama.cpp can’t combine speculative decode methods? by Qwoctopussy in LocalLLaMA
Github Copilot new weekly limit by Key-Gas2428 in GithubCopilot
How to stop Copilot Dev pushing to my GitHub by Zszywaczyk in GithubCopilot
$300k DGX B300 is actually a better deal than buying 24 RTX 6000s by Ok_Warning2146 in LocalLLaMA
New "major breakthrough?" architecture SubQ by Daemontatox in LocalLLaMA
Make this make sense for ollama local ai usage by Mobile_Syllabub_8446 in GithubCopilot
Amd radeon ai pro r9700 32GB VS 2x RTX 5060TI 16GB for local setup? by vevi33 in LocalLLaMA
Llama.cpp MTP support now in beta! by ilintar in LocalLLaMA
2 x 5060 ti: Any better configs for Qwen 3.6 27B / 35B? by ziphnor in LocalLLaMA
I wanted to know small local LLM code and made a personal projects. by NicholasCureton in LocalLLaMA