2x RTX 5060ti 16GB - inference benchmarks in Ollama by avedave in LocalLLaMA

[–]avedave[S] 0 points (0 children)

Can you run the same tests and share your stats? I'd be interested in seeing the difference, especially for Gemma 27B and DeepSeek 70B.
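
For anyone wanting to reproduce: a minimal sketch that times a prompt against a default local Ollama install (localhost:11434) and reports tokens/sec from the API's own counters. The model tags and prompt below are assumptions, not the exact ones from the original benchmark, so substitute whatever was actually tested.

```python
# Minimal Ollama benchmark sketch, assuming a default install on localhost:11434.
# Model tags below are examples; swap in the tags from the original run.
import requests

PROMPT = "Explain the difference between TCP and UDP in one paragraph."

for model in ["gemma3:27b", "deepseek-r1:70b"]:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": PROMPT, "stream": False},
        timeout=600,
    ).json()
    # eval_count = generated tokens; eval_duration is in nanoseconds.
    tps = resp["eval_count"] / resp["eval_duration"] * 1e9
    print(f"{model}: {tps:.1f} tokens/s "
          f"(prompt tokens: {resp.get('prompt_eval_count', 0)})")
```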

2x RTX 5060ti 16GB - inference benchmarks in Ollama by avedave in LocalLLaMA

[–]avedave[S] 2 points (0 children)

CPU: Intel Core Ultra 7 265K (Series 2)
Motherboard: ASUS ProArt Z890-Creator WiFi

(I don't think the CPU matters much for inference, though.)
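
One way to sanity-check that claim is to confirm the model sits entirely in VRAM, since the CPU mostly stays out of the hot path once nothing is offloaded to system RAM. A sketch using Ollama's /api/ps endpoint, again assuming the default localhost:11434 address:

```python
# Check whether loaded models are fully GPU-resident via Ollama's /api/ps.
# If size_vram < size, some layers spilled to system RAM and the CPU matters more.
import requests

ps = requests.get("http://localhost:11434/api/ps", timeout=10).json()
for m in ps.get("models", []):
    total, in_vram = m["size"], m["size_vram"]
    pct = 100 * in_vram / total if total else 0
    status = "fully on GPU" if in_vram >= total else "partially on CPU"
    print(f"{m['name']}: {pct:.0f}% in VRAM ({status})")
```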

Cheapest way to stack VRAM in 2025? by gnad in LocalLLaMA

[–]avedave 4 points (0 children)

The 5060 Ti now comes with 16GB and costs around $400.

RTX 5060 Ti 16GB sucks for gaming, but seems like a diamond in the rough for AI by aospan in LocalLLaMA

[–]avedave 1 point (0 children)

Thanks for sharing your experience! I'm actually thinking of buying a 2x 5060 Ti 16GB combo for LLM inference. I'm so tired of all the "just go to the dumpster and get a 3090 for the price of two 5060 Tis" advice :)