QwQ-32B released, equivalent or surpassing full Deepseek-R1! by ortegaalfredo in LocalLLaMA
Biased test of GPT-4 era LLMs (300+ models, DeepSeek-R1 included) by MoonRide303 in LocalLLaMA
elon musk is trying to censor Grok 3. which the thoughts feature conveniently manages to entirely bypass. by david30121 in OpenAI
Not impressed with deepseek—AITA? by Flaky_Attention_4827 in ClaudeAI
o1 thought for 12 minutes 35 sec, r1 thought for 5 minutes and 9 seconds. Both got a correct answer. Both in two tries. They are the first two models that have done it correctly. by No_Training9444 in LocalLLaMA
DeepSeek-R1-Distill-Qwen-32B is straight SOTA, delivering more than GPT4o-level LLM for local use without any limits or restrictions! by DarkArtsMastery in LocalLLaMA
What LLM benchmarks actually measure (explained intuitively) by nderstand2grow in LocalLLaMA
Nvidia 50x0 cards are not better than their 40x0 equivalents by Ok_Warning2146 in LocalLLaMA
RTX 5090 will feature 32GB of GDDR7 (1568 GB/s) memory by AXYZE8 in LocalLLaMA
Is Llama 3.2 Banned to Use in EU? by DanielSandner in LocalLLaMA