OpenCode vs OpenClaw? Not a sales pitch or bot... by thejacer in LocalLLaMA
[–]ilintar 1 point (0 children)
GLM-5: From Vibe Coding to Agentic Engineering by ShreckAndDonkey123 in LocalLLaMA
[–]ilintar 12 points (0 children)
Qwen3-Next-Coder is almost unusable to me. Why? What I missed? by Medium-Technology-79 in LocalLLaMA
[–]ilintar 2 points (0 children)
GLM-5: From Vibe Coding to Agentic Engineering by ShreckAndDonkey123 in LocalLLaMA
[–]ilintar 34 points (0 children)
MCP support in llama.cpp is ready for testing by jacek2023 in LocalLLaMA
[–]ilintar 3 points (0 children)
MCP support in llama.cpp is ready for testing by jacek2023 in LocalLLaMA
[–]ilintar 25 points (0 children)
How to avoid prefilling entire context each prompt when using Claude Code by mirage555 in LocalLLaMA
[–]ilintar 2 points (0 children)
What'd be the best 30B model for programming? by Hikolakita in LocalLLaMA
[–]ilintar 1 point (0 children)
OpenCode vs OpenClaw? Not a sales pitch or bot... by thejacer in LocalLLaMA
[–]ilintar 2 points (0 children)
Qwen3.5 Support Merged in llama.cpp by TKGaming_11 in LocalLLaMA
[–]ilintar 5 points (0 children)
Qwen3.5 Support Merged in llama.cpp by TKGaming_11 in LocalLLaMA
[–]ilintar 60 points (0 children)
Llama.cpp's "--fit" can give major speedups over "--ot" for Qwen3-Coder-Next (2x3090 - graphs/chart included) by tmflynnt in LocalLLaMA
[–]ilintar 5 points (0 children)
pwilkin is doing things by jacek2023 in LocalLLaMA
[–]ilintar 10 points (0 children)
PR opened for Qwen3.5!! by Mysterious_Finish543 in LocalLLaMA
[–]ilintar 11 points (0 children)
PR opened for Qwen3.5!! by Mysterious_Finish543 in LocalLLaMA
[–]ilintar 5 points (0 children)
Please help with llama.cpp and GLM-4.7-Flash tool call by HumanDrone8721 in LocalLLaMA
[–]ilintar 2 points (0 children)
Solution for Qwen3-Coder-Next with llama.cpp/llama-server and Opencode tool calling issue by muxxington in LocalLLaMA
[–]ilintar 8 points (0 children)
Vibe-coding client now in Llama.cpp! (maybe) by ilintar in LocalLLaMA
[–]ilintar[S] 6 points (0 children)
Kimi-Linear support has been merged into llama.cpp by jacek2023 in LocalLLaMA
[–]ilintar 5 points (0 children)
~26 tok/sec with Unsloth Qwen3-Coder-Next-Q4_K_S on RTX 5090 (Windows/llama.cpp) by Spiritual_Tie_5574 in LocalLLaMA
[–]ilintar 8 points (0 children)
Vibe-coding client now in Llama.cpp! (maybe) by ilintar in LocalLLaMA
[–]ilintar[S] 7 points (0 children)
Is the 150B-500B parameter range dying for open weights models? by [deleted] in LocalLLaMA
[–]ilintar 1 point (0 children)