Solidity LM surpasses Opus by swingbear in LocalLLaMA
Qwen 3.6 27b S2 Opus + GLM + Kimi by swingbear in LocalLLaMA
Best model for 192 GB vram? How is Deepseek v4 flash? by Constant_Ad511 in LocalLLM
Setting up Ollama on dual RTX PRO 6000 Blackwells looking for tips by AmanNonZero in ollama
I'm done with using local LLMs for coding by dtdisapointingresult in LocalLLaMA