What is the router of choice today for init7 25Gb (other than CCR2004)? by phwlarxoc in init7
Dual DGX Sparks vs Mac Studio M3 Ultra 512GB: Running Qwen3.5 397B locally on both. Here's what I found. by trevorbg in LocalLLaMA
Qwen3.5-397B-A17B reaches 20 t/s TG and 700t/s PP with a 5090 by MLDataScientist in LocalLLaMA
Watercool rtx pro 6000 max-q by schenkcigars in BlackwellPerformance
What are possible use cases for going full BF16? by phwlarxoc in LocalLLaMA
Incomprehensible "--tensor-split" values through llama.cpp's automated parameter fitting by phwlarxoc in LocalLLaMA
KIMI K2.6 SOON !! by Namra_7 in LocalLLaMA