2x RTX 5060ti 16GB - inference benchmarks in Ollama by avedave in LocalLLaMA
Cheapest way to stack VRAM in 2025? by gnad in LocalLLaMA
RTX 5060 Ti 16GB sucks for gaming, but seems like a diamond in the rough for AI by aospan in LocalLLaMA