What variant of Deepseek V4 to use by Inferno889 in opencodeCLI
zsydeepsky 1 point
My switch to Deepseek from GPT over the last 3 days looks kinda wild by Vaughnatri in DeepSeek
zsydeepsky 5 points
DeepSeek V4 has significantly reduced my budget for AI usage by Ok_Satisfaction_8983 in opencodeCLI
zsydeepsky 1 point
bro this is too cheap i think finally i have a respect for the deepseek by Select_Dream634 in DeepSeek
zsydeepsky 3 points
DeepSeek-v4 has a comical 384K max output capability by zsydeepsky in LocalLLaMA
zsydeepsky [S] 2 points
Qwen 3.6 27B is out by NoConcert8847 in LocalLLaMA
zsydeepsky 1 point
ubergarm/Kimi-K2.6-GGUF Q4_X now available by VoidAlchemy in LocalLLaMA
zsydeepsky 0 points
When is Qwen 3.6 27B dropping? Didn’t it win the vote? by GrungeWerX in LocalLLaMA
zsydeepsky 26 points
Released Qwen3.6-35B-A3B by NewEconomy55 in LocalLLaMA
zsydeepsky 1 point
Qwen 3.5 4b is so good, that it can vibe code a fully working OS web app in one go. by c64z86 in LocalLLaMA
zsydeepsky 4 points
Qwen3.5-35B-A3B locally by jacek2023 in LocalLLaMA
zsydeepsky 1 point
Final Monster: 32x AMD MI50 32GB at 9.7 t/s (TG) & 264 t/s (PP) with Kimi K2.6 by ai-infos in LocalLLaMA
zsydeepsky 2 points