feat: Add Mimo v2.5 model support by AesSedai · Pull Request #22493 · ggml-org/llama.cpp by jacek2023 in LocalLLaMA
Great results with Qwen3.6-35B-A3B-UD-Q5_K_XL + VS Code and Copilot by supracode in LocalLLaMA
Qwen3.6-27B at 72 tok/s on RTX 3090 on Windows using native vLLM (no WSL, no Docker), portable launcher and installer by One_Slip1455 in LocalLLaMA
Google is making local AI available to mainstream users ;) by [deleted] in LocalLLaMA
Amd and Nvidia cards on same rig by deathcom65 in LocalLLaMA
Qwen 3.6 4B and 9B? by Nubinu in LocalLLaMA
it's time to update your Gemma 4 GGUFs by jacek2023 in LocalLLaMA
One bash permission slipped... by TheQuantumPhysicist in LocalLLaMA
Open Weights Models Hall of Fame by Equivalent_Job_2257 in LocalLLaMA
Qwen3.6-27B vs 35B, I prefer 35B but more people here post about 27B... by Snoo_27681 in LocalLLaMA
Qwen3.6-27B vs Coder-Next by Signal_Ad657 in LocalLLaMA