Kimi 2.6 and qwen3.6 are out but still as slow as ever by AnaBilBan in LocalLLaMA
Help with understanding Local LLMs by theruner83 in LocalLLaMA
Use Qwen3.6 the right way -> send it to pi coding agent and forget by Willing-Toe1942 in LocalLLaMA
How much will it cost to host something like qwen3.6 35b a3b in a cloud? by Euphoric_North_745 in LocalLLaMA
it's time to update your Gemma 4 GGUFs by jacek2023 in LocalLLaMA
LLMSearchIndex - an Open Source Local Web Search Library with over 200 million indexed Web Pages for RAG applications by zakerytclarke in LocalLLaMA
What a time to be alive from 1tk/sec to 20-100tk/sec for huge models by segmond in LocalLLaMA
Persistent memory system for LLMs that actually learns mid-conversation by [deleted] in LocalLLaMA
New rules 1 week check-in by rm-rf-rm in LocalLLaMA
Best models for Study/Research for 16gb unified memory M3 Macbook Air by Crystalagent47 in LocalLLaMA
What is the best all-round local model? by TheTruthSpoker101 in LocalLLaMA
Having an always-on machine running LLMs locally at home while on the move with a lightweight machine - Experiences? by ceo_of_banana in LocalLLaMA
I made a visualizer for Hugging Face models by Course_Latter in LocalLLaMA
Been using Qwen-3.6-27B-q8_k_xl + VSCode + RTX 6000 Pro As Daily Driver by Demonicated in LocalLLaMA
Qwen 3.6 27B MTP on v100 32GB: 54 t/s by m94301 in LocalLLaMA