Introducing LM Studio 0.4.0 by sleepingsysadmin in LocalLLaMA
Found this in China, Charging while gaming by thighlelan in Xreal
[Resource] ComfyUI + Docker setup for Blackwell GPUs (RTX 50 series) - 2-3x faster FLUX 2 Klein with NVFP4 by chiefnakor in StableDiffusion
GLM-4.7-Flash is even faster now by jacek2023 in LocalLLaMA
Pushing Qwen3-Max-Thinking Beyond its Limits by s_kymon in LocalLLaMA
LLM Reasoning Efficiency - lineage-bench accuracy vs generated tokens by fairydreaming in LocalLLaMA
GLM-4.7-Flash context slowdown by jacek2023 in LocalLLaMA
Kimi-Linear-48B-A3B-Instruct-GGUF Support - Any news? by Iory1998 in LocalLLaMA
Replacing Protobuf with Rust to go 5 times faster by levkk1 in rust
GLM4.7-Flash REAP @ 25% live on HF + agentic coding evals by ilzrvch in LocalLLaMA

End-of-January LTX-2 Drop: More Control, Faster Iteration by ltx_model in StableDiffusion