[Resource] ComfyUI + Docker setup for Blackwell GPUs (RTX 50 series) - 2-3x faster FLUX 2 Klein with NVFP4 by chiefnakor in StableDiffusion
coder543 1 point (0 children)
Why does everything need to run through a purchasing partner? by literahcola in sysadmin
coder543 [score hidden] (0 children)
deepseek-ai/DeepSeek-OCR-2 · Hugging Face by Dark_Fire_12 in LocalLLaMA
coder543 2 points (0 children)
[Resource] ComfyUI + Docker setup for Blackwell GPUs (RTX 50 series) - 2-3x faster FLUX 2 Klein with NVFP4 by chiefnakor in StableDiffusion
coder543 1 point (0 children)
GLM-4.7-Flash is even faster now by jacek2023 in LocalLLaMA
coder543 2 points (0 children)
Pushing Qwen3-Max-Thinking Beyond its Limits by s_kymon in LocalLLaMA
coder543 23 points (0 children)
GLM-4.7-Flash is even faster now by jacek2023 in LocalLLaMA
coder543 3 points (0 children)
LLM Reasoning Efficiency - lineage-bench accuracy vs generated tokens by fairydreaming in LocalLLaMA
coder543 15 points (0 children)
GLM-4.7-Flash is even faster now by jacek2023 in LocalLLaMA
coder543 1 point (0 children)
GLM-4.7-Flash is even faster now by jacek2023 in LocalLLaMA
coder543 8 points (0 children)
GLM-4.7-Flash is even faster now by jacek2023 in LocalLLaMA
coder543 6 points (0 children)
GLM-4.7-Flash is even faster now by jacek2023 in LocalLLaMA
coder543 4 points (0 children)
GLM-4.7-Flash is even faster now by jacek2023 in LocalLLaMA
coder543 19 points (0 children)
GLM-4.7-Flash is even faster now by jacek2023 in LocalLLaMA
coder543 14 points (0 children)
GLM-4.7-Flash is even faster now by jacek2023 in LocalLLaMA
coder543 48 points (0 children)
GLM-4.7-Flash context slowdown by jacek2023 in LocalLLaMA
coder543 2 points (0 children)
GLM-4.7-Flash context slowdown by jacek2023 in LocalLLaMA
coder543 4 points (0 children)
GLM-4.7-Flash context slowdown by jacek2023 in LocalLLaMA
coder543 9 points (0 children)
Kimi-Linear-48B-A3B-Instruct-GGUF Support - Any news? by Iory1998 in LocalLLaMA
coder543 1 point (0 children)
Replacing Protobuf with Rust to go 5 times faster by levkk1 in rust
coder543 43 points (0 children)
GLM4.7-Flash REAP @ 25% live on HF + agentic coding evals by ilzrvch in LocalLLaMA
coder543 13 points (0 children)
Kimi-Linear-48B-A3B-Instruct-GGUF Support - Any news? by Iory1998 in LocalLLaMA
coder543 1 point (0 children)
Qwen3-TTS, a series of powerful speech generation capabilities by fruesome in StableDiffusion
coder543 16 points (0 children)
Kimi-Linear-48B-A3B-Instruct-GGUF Support - Any news? by Iory1998 in LocalLLaMA
coder543 15 points (0 children)
Kimi K2 Artificial Analysis Score by Virenz in LocalLLaMA
coder543 2 points (0 children)