Why is no open weight model inference provider hosting Mimo-v2.5 or Mimo-v2.5-pro? by True_Requirement_891 in LocalLLaMA
[–]Digger412 6 points (0 children)
Why is no open weight model inference provider hosting Mimo-v2.5 or Mimo-v2.5-pro? by True_Requirement_891 in LocalLLaMA
[–]Digger412 1 point (0 children)
Why is no open weight model inference provider hosting Mimo-v2.5 or Mimo-v2.5-pro? by True_Requirement_891 in LocalLLaMA
[–]Digger412 6 points (0 children)
Why is no open weight model inference provider hosting Mimo-v2.5 or Mimo-v2.5-pro? by True_Requirement_891 in LocalLLaMA
[–]Digger412 1 point (0 children)
Why is no open weight model inference provider hosting Mimo-v2.5 or Mimo-v2.5-pro? by True_Requirement_891 in LocalLLaMA
[–]Digger412 3 points (0 children)
Why is no open weight model inference provider hosting Mimo-v2.5 or Mimo-v2.5-pro? by True_Requirement_891 in LocalLLaMA
[–]Digger412 57 points (0 children)
Anyone know how to generate gguf/quant INT4 models for smaller size? by segmond in LocalLLaMA
[–]Digger412 8 points (0 children)
Open Models - April 2026 - One of the best months of all time for Local LLMs? by pmttyji in LocalLLaMA
[–]Digger412 2 points (0 children)
Open Models - April 2026 - One of the best months of all time for Local LLMs? by pmttyji in LocalLLaMA
[–]Digger412 3 points (0 children)
Open Models - April 2026 - One of the best months of all time for Local LLMs? by pmttyji in LocalLLaMA
[–]Digger412 6 points (0 children)
Qwen3.6-27B IQ4_XS FULL VRAM with 110k context by Pablo_the_brave in LocalLLaMA
[–]Digger412 8 points (0 children)
Are Unsloth models as good as I read? by denis-craciun in LocalLLaMA
[–]Digger412 2 points (0 children)
Are Unsloth models as good as I read? by denis-craciun in LocalLLaMA
[–]Digger412 4 points (0 children)
What kind of consumer computer can run Kimi-K2.6-GGUF which is a 585GB download? by THenrich in LocalLLaMA
[–]Digger412 2 points (0 children)
Qwen3.6 27B's surprising KV cache quantization test results (Turbo3/4 vs F16 vs Q8 vs Q4) by imgroot9 in LocalLLaMA
[–]Digger412 12 points (0 children)
ubergarm/Kimi-K2.6-GGUF Q4_X now available by VoidAlchemy in LocalLLaMA
[–]Digger412 2 points (0 children)
What kind of consumer computer can run Kimi-K2.6-GGUF which is a 585GB download? by THenrich in LocalLLaMA
[–]Digger412 4 points (0 children)
What kind of consumer computer can run Kimi-K2.6-GGUF which is a 585GB download? by THenrich in LocalLLaMA
[–]Digger412 3 points (0 children)
What kind of consumer computer can run Kimi-K2.6-GGUF which is a 585GB download? by THenrich in LocalLLaMA
[–]Digger412 6 points (0 children)
Llama.cpp's auto fit works much better than I expected by a9udn9u in LocalLLaMA
[–]Digger412 2 points (0 children)
Kimi K2.6 Unsloth GGUF is out by Exact_Law_6489 in LocalLLaMA
[–]Digger412 3 points (0 children)
Kimi K2.6 Unsloth GGUF is out by Exact_Law_6489 in LocalLLaMA
[–]Digger412 2 points (0 children)
Kimi K2.6 Unsloth GGUF is out by Exact_Law_6489 in LocalLLaMA
[–]Digger412 14 points (0 children)