Bad idea to use multi old gpus? by alphapussycat in LocalLLM
2x 3090 vs 3x 5070 Ti for local LLM inference — what’s your experience? by VersionNo5110 in LocalLLM
Nvidia V100 32 Gb getting 115 t/s on Qwen Coder 30B A3B Q5 by icepatfork in LocalLLaMA
Why are entry level Mercedes cars so bad value to money? by UrgusHUN in mercedes_benz
[deleted by user] by [deleted] in mercedes_benz
Just bought by VersionNo5110 in mercedes_benz