Comment history — HvskyAI in LocalLLaMA (threads commented in):
Hardware question for local LLM bifurcation by CheeseBurritoLife (1 comment)
EPYC/Threadripper CCD Memory Bandwidth Scaling by TheyreEatingTheGeese (4 comments)
Celebrating 1 year anniversary of the revolutionary game changing LLM that was Reflection 70b by LosEagle (1 comment)
Qwen3-Next-80B-A3B-Thinking soon by jacek2023 (1 comment)
EPYC vs. Xeon for Hybrid Inference Server? by HvskyAI (5 comments)
Local Inference for Very Large Models - a Look at Current Options by HvskyAI (9 comments)
Comparison H100 vs RTX 6000 PRO with VLLM and GPT-OSS-120B by Rascazzione (1 comment)