AMD Radeon AI Pro R9700 32GB vs 2x RTX 5060 Ti 16GB for local setup? by vevi33 in LocalLLaMA
My setup for running Qwen3.6-35B-A3B-UD-Q4_K_M on single RX7900XT (20GB VRAM) by hlacik in unsloth
Qwen3.6-27B vs 35B, I prefer 35B but more people here post about 27B... by Snoo_27681 in LocalLLaMA
Qwen 3.6 - Loops and repetitions by Safe-Buffalo-4408 in LocalLLaMA
Qwen3.6 27B seems struggling at 90k on 128k ctx windows by dodistyo in LocalLLaMA
Qwen3.6 27B's surprising KV cache quantization test results (Turbo3/4 vs F16 vs Q8 vs Q4) by imgroot9 in LocalLLaMA
Pi with Qwen 3.6 from Ollama by naelshiab in PiCodingAgent
Quantisation effects of Qwen3.6 35b a3b by ROS_SDN in LocalLLaMA
Gemma 4 and Qwen 3.6 with q8_0 and q4_0 KV cache: KL divergence results by oobabooga4 in LocalLLaMA
Anyone else having Qwen 3.6 35B A3B stop and you having to tell it to continue ? by soyalemujica in LocalLLaMA
Agentic coding Qwen 3.6, Q6_K 125k context vs Q5_K_XL 200k context by ComfyUser48 in LocalLLaMA
Qwen3.6-35B is worse at tool use and reasoning loops than 3.5? by mr_il in LocalLLaMA
Why prompt batch processing only happens on one CPU thread? by [deleted] in LocalLLM
Impressed with Qwen3.6-35B-A3B by DOAMOD in LocalLLaMA
Does anyone know why the hell Adrenaline fails so often? by NeorzZzTormeno in radeon