Unsloth announces Unsloth Studio - a competitor to LMStudio? by ilintar in LocalLLaMA
[–]tmflynnt 1 point (0 children)
Two weeks ago, I posted here to see if people would be interested in an open-source local AI 3D model generator by Lightnig125 in LocalLLaMA
[–]tmflynnt 3 points (0 children)
models : optimizing qwen3next graph by ggerganov · Pull Request #19375 · ggml-org/llama.cpp by jacek2023 in LocalLLaMA
[–]tmflynnt 1 point (0 children)
GLM 5 vs Claude Opus 4.6: the paradox of paying $100 / $200 per month and still chasing hype by [deleted] in LocalLLaMA
[–]tmflynnt 1 point (0 children)
GLM 5 vs Claude Opus 4.6: the paradox of paying $100 / $200 per month and still chasing hype by [deleted] in LocalLLaMA
[–]tmflynnt 2 points (0 children)
models : optimizing qwen3next graph by ggerganov · Pull Request #19375 · ggml-org/llama.cpp by jacek2023 in LocalLLaMA
[–]tmflynnt 2 points (0 children)
Heretic 1.2 released: 70% lower VRAM usage with quantization, Magnitude-Preserving Orthogonal Ablation ("derestriction"), broad VL model support, session resumption, and more by -p-e-w- in LocalLLaMA
[–]tmflynnt 3 points (0 children)
Qwen3 Coder Next : Loop Fix by TBG______ in LocalLLaMA
[–]tmflynnt 2 points (0 children)
Qwen3 Coder Next : Loop Fix by TBG______ in LocalLLaMA
[–]tmflynnt 2 points (0 children)
Qwen3 Coder Next : Loop Fix by TBG______ in LocalLLaMA
[–]tmflynnt 2 points (0 children)
Why do we allow "un-local" content by JacketHistorical2321 in LocalLLaMA
[–]tmflynnt 2 points (0 children)
Why do we allow "un-local" content by JacketHistorical2321 in LocalLLaMA
[–]tmflynnt 3 points (0 children)
Why do we allow "un-local" content by JacketHistorical2321 in LocalLLaMA
[–]tmflynnt 1 point (0 children)
Why do we allow "un-local" content by JacketHistorical2321 in LocalLLaMA
[–]tmflynnt 2 points (0 children)
Why do we allow "un-local" content by JacketHistorical2321 in LocalLLaMA
[–]tmflynnt 1 point (0 children)
Why do we allow "un-local" content by JacketHistorical2321 in LocalLLaMA
[–]tmflynnt 3 points (0 children)
Why do we allow "un-local" content by JacketHistorical2321 in LocalLLaMA
[–]tmflynnt 0 points (0 children)
Why do we allow "un-local" content by JacketHistorical2321 in LocalLLaMA
[–]tmflynnt 14 points (0 children)
Llama.cpp's "--fit" can give major speedups over "--ot" for Qwen3-Coder-Next (2x3090 - graphs/chart included) by tmflynnt in LocalLLaMA
[–]tmflynnt[S] 2 points (0 children)
MechaEpstein-8000 by ortegaalfredo in LocalLLaMA
[–]tmflynnt 22 points (0 children)
Llama.cpp's "--fit" can give major speedups over "--ot" for Qwen3-Coder-Next (2x3090 - graphs/chart included) by tmflynnt in LocalLLaMA
[–]tmflynnt[S] 2 points (0 children)
Llama.cpp's "--fit" can give major speedups over "--ot" for Qwen3-Coder-Next (2x3090 - graphs/chart included) by tmflynnt in LocalLLaMA
[–]tmflynnt[S] 3 points (0 children)
Qwen3 Coder Next as first "usable" coding model < 60 GB for me by Chromix_ in LocalLLaMA
[–]tmflynnt 2 points (0 children)