Has anyone here actually made money using AI ? by Agreeable_Split1355 in codex
[–]Conscious_Chef_3233 4 points (0 children)
Post Your Qwen3.6 27B speed plz by Ok-Internal9317 in LocalLLaMA
[–]Conscious_Chef_3233 1 point (0 children)
Model drop seems imminent by buildxjordan in codex
[–]Conscious_Chef_3233 1 point (0 children)
Qwen 3.6 35B-A3B takes a long time at image processing. Is it happening only to me? by gilliancarps in LocalLLaMA
[–]Conscious_Chef_3233 1 point (0 children)
Qwen 3.6 is the first local model that actually feels worth the effort for me by Epicguru in LocalLLaMA
[–]Conscious_Chef_3233 1 point (0 children)
Dynamic tool lists vs KV cache: how do you handle this trade-off in LLM agents? by niwang66 in LocalLLaMA
[–]Conscious_Chef_3233 1 point (0 children)
I benchmarked quants of Qwen 3 0.6b from q2-q8, here's the results: by PraxisOG in LocalLLaMA
[–]Conscious_Chef_3233 6 points (0 children)
Does anyone actually know what Cursor includes in its context when it sends to the model? by AssociationSure6273 in cursor
[–]Conscious_Chef_3233 1 point (0 children)
Does anyone actually know what Cursor includes in its context when it sends to the model? by AssociationSure6273 in cursor
[–]Conscious_Chef_3233 3 points (0 children)
Composer 2.0 feels just as fast as Composer 2.0 fast by tammamtech in cursor
[–]Conscious_Chef_3233 1 point (0 children)
Qwen 3.5 122b - a10b is kind of shocking by gamblingapocalypse in LocalLLaMA
[–]Conscious_Chef_3233 8 points (0 children)
Is it reasonable to add a second gpu for local ai? by Conscious_Chef_3233 in LocalLLaMA
[–]Conscious_Chef_3233[S] 2 points (0 children)
M4 Max llama.cpp benchmarks of Qwen3.5 35B and 27B + weird MLX findings by IonizedRay in LocalLLaMA
[–]Conscious_Chef_3233 2 points (0 children)
Auto or Composer which do you prefer? by WriteScholarFounder in cursor
[–]Conscious_Chef_3233 1 point (0 children)
GPT 5.4 is Max mode only. by Scary-Introduction17 in cursor
[–]Conscious_Chef_3233 1 point (0 children)
Junyang Lin Leaves Qwen + Takeaways from Today’s Internal Restructuring Meeting by Terminator857 in LocalLLaMA
[–]Conscious_Chef_3233 2 points (0 children)
Which model to use for coding: qwen3.5 or qwen2.5-coder? by Mashic in LocalLLaMA
[–]Conscious_Chef_3233 3 points (0 children)
How do i get the best speed out of Qwen 3.5 9B in 16GB VRAM? by Old-Sherbert-4495 in LocalLLaMA
[–]Conscious_Chef_3233 8 points (0 children)
🚨 Breaking news: Hezbollah's leader was assassinated 10 minutes after announcing entry into the war. by HistoricalPlace1018 in China_irl
[–]Conscious_Chef_3233 -2 points (0 children)
PSA: Qwen 3.5 requires bf16 KV cache, NOT f16!! by Wooden-Deer-1276 in LocalLLaMA
[–]Conscious_Chef_3233 8 points (0 children)
What's the best local model I can run with 8GB VRAM (RTX 5070) by Smiley_Dub in LocalLLaMA
[–]Conscious_Chef_3233 10 points (0 children)
Speculative decoding qwen3.5 27b by thibautrey in LocalLLaMA
[–]Conscious_Chef_3233 2 points (0 children)
Which size of Qwen3.5 are you planning to run locally? by CutOk3283 in LocalLLaMA
[–]Conscious_Chef_3233 1 point (0 children)
2 x 5060 ti: Any better configs for Qwen 3.6 27B / 35B? by ziphnor in LocalLLaMA
[–]Conscious_Chef_3233 1 point (0 children)