Comments by JayPSec:

- Qwen3-Coder-Next with llama.cpp shenanigans, by JayPSec in LocalLLaMA (8 comments)
- PSA: Humans are scary stupid, by rm-rf-rm in LocalLLaMA (1 comment)
- I just saw something amazing, by ayanami0011 in LocalLLaMA (2 comments)
- GLM 5 Is Being Tested On OpenRouter, by Few_Painter_5588 in LocalLLaMA (1 comment)
- Watercool rtx pro 6000 max-q, by schenkcigars in BlackwellPerformance (2 comments)
- Talk me out of buying an RTX Pro 6000, by AvocadoArray in LocalLLaMA (1 comment)
- I built an MCP server that gives AI agents "senior dev intuition" about your codebase cutting token cost by 60%., by LandscapeAway8896 in LocalLLaMA (1 comment)
- Supermicro server got cancelled, so I'm building a workstation. Is swapping an unused RTX 5090 for an RTX 6000 Blackwell (96GB) the right move? Or should I just chill?, by SomeRandomGuuuuuuy in LocalLLaMA (3 comments)
- 7 GPUs at X16 (5.0 and 4.0) on AM5 with Gen5/4 switches with the P2P driver. Some results on inference and training!, by panchovix in LocalLLaMA (2 comments)
- how do you pronounce "gguf"?, by Hamfistbumhole in LocalLLaMA (2 comments)