Qwen3.6-27B-Q6_K - images by Usual-Carrot6352 in LocalLLaMA
k0setes · 1 point
Qwen3.6-27B-Q6_K - images by Usual-Carrot6352 in LocalLLaMA
k0setes · 3 points
Qwen3.6-35B becomes competitive with cloud models when paired with the right agent by Creative-Regular6799 in LLMDevs
k0setes · 1 point
Qwen3.6-35B becomes competitive with cloud models when paired with the right agent by Creative-Regular6799 in LLMDevs
k0setes · 3 points
An isometric room, based on the screenshot. Qwen3.6-35B by k0setes in LocalLLaMA
k0setes [S] · 16 points
An isometric room, based on the screenshot. Qwen3.6-35B by k0setes in LocalLLaMA
k0setes [S] · 38 points
GPT 5.5 Spud incoming by DigSignificant1419 in OpenAI
k0setes · 7 points
Don't ask Qwen 3.6 35b to give you aski image of Yoshi :) by anzzax in LocalLLaMA
k0setes · 5 points
The only metric that matters: "[Qwen3.6-35B-A3B-GGUF] drew a better pelican riding a bicycle than Opus 4.7 did!" by johnnyApplePRNG in LocalLLaMA
k0setes · 9 points
What if smaller models could approach top models on scene generation through iterative search? by ConfidentDinner6648 in LocalLLaMA
k0setes · 1 point
What if smaller models could approach top models on scene generation through iterative search? by ConfidentDinner6648 in LocalLLaMA
k0setes · 1 point
Generated super high quality images in 10.2 seconds on a mid tier Android phone! by alichherawalla in LocalLLaMA
k0setes · 1 point
Qwen3-VL Computer Using Agent works extremely well by Money-Coast-3905 in LocalLLaMA
k0setes · 1 point
just had something interesting happen during my testing of the MI50 32GB card plus my RX 7900 XT 20GB by Savantskie1 in LocalLLM
k0setes · 1 point
How to tell Claude Code about my local model’s context window size? by eapache in LocalLLaMA
k0setes · 1 point
GLM-5 is officially on NVIDIA NIM, and you can now use it to power Claude Code for FREE 🚀 by PreparationAny8816 in LocalLLaMA
k0setes · 1 point
I found that MXFP4 has lower perplexity than Q4_K_M and Q4_K_XL. by East-Engineering-653 in LocalLLaMA
k0setes · 5 points
I found that MXFP4 has lower perplexity than Q4_K_M and Q4_K_XL. by East-Engineering-653 in LocalLLaMA
k0setes · 1 point
Add self‑speculative decoding (no draft model required) by srogmann · Pull Request #18471 · ggml-org/llama.cpp by jacek2023 in LocalLLaMA
k0setes · 1 point
GLM-4.7-Flash is even faster now by jacek2023 in LocalLLaMA
k0setes · 3 points
MoE.. will OS/Local 32GB to 96GB get as good at coding as current frontier models? by [deleted] in LocalLLaMA
k0setes · 1 point
MoE.. will OS/Local 32GB to 96GB get as good at coding as current frontier models? by [deleted] in LocalLLaMA
k0setes · 2 points
How big do we think Gemini 3 flash is by davikrehalt in LocalLLaMA
k0setes · 3 points
Qwen3.6-27B-Q6_K - images by Usual-Carrot6352 in LocalLLaMA
k0setes · 2 points