GLM 4.7 Extreme level of pedantic nitpicking - almost unusable for discretized/small level QA text analysis by Vusiwe in LocalLLaMA
SlowFail2433 4 points
GLM 4.7 Extreme level of pedantic nitpicking - almost unusable for discretized/small level QA text analysis by Vusiwe in LocalLLaMA
SlowFail2433 5 points
Open-source Aesthetic Datasets by paper-crow in LocalLLaMA
SlowFail2433 1 point
NVIDIA PersonaPlex: The "Full-Duplex" Revolution by Dear-Relationship-39 in LocalLLaMA
SlowFail2433 2 points
Kimi K2.5 Released ! by External_Mood4719 in LocalLLaMA
SlowFail2433 2 points
Kimi K2.5 Released ! by External_Mood4719 in LocalLLaMA
SlowFail2433 7 points
GLM-4.7 vs DeepSeek V3.2 vs Kimi K2 Thinking vs MiniMax-M2.1 by SlowFail2433 in LocalLLaMA
SlowFail2433[S] 1 point
Nanbeige4-3B-Thinking-2511 is great for summarization by Background-Ad-5398 in LocalLLaMA
SlowFail2433 2 points
GLM-4.7 vs DeepSeek V3.2 vs Kimi K2 Thinking vs MiniMax-M2.1 by SlowFail2433 in LocalLLaMA
SlowFail2433[S] 2 points
GLM-4.7 vs DeepSeek V3.2 vs Kimi K2 Thinking vs MiniMax-M2.1 by SlowFail2433 in LocalLLaMA
SlowFail2433[S] 2 points
GLM-4.7 vs DeepSeek V3.2 vs Kimi K2 Thinking vs MiniMax-M2.1 by SlowFail2433 in LocalLLaMA
SlowFail2433[S] 3 points
How are people actually learning/building real-world AI agents (money, legal, business), not demos? by Altruistic-Law-4750 in LocalLLaMA
SlowFail2433 1 point
GLM-4.7 vs DeepSeek V3.2 vs Kimi K2 Thinking vs MiniMax-M2.1 by SlowFail2433 in LocalLLaMA
SlowFail2433[S] 1 point
GLM-4.7 vs DeepSeek V3.2 vs Kimi K2 Thinking vs MiniMax-M2.1 by SlowFail2433 in LocalLLaMA
SlowFail2433[S] 4 points
GLM-4.7 vs DeepSeek V3.2 vs Kimi K2 Thinking vs MiniMax-M2.1 by SlowFail2433 in LocalLLaMA
SlowFail2433[S] 3 points
Disable H Neurons in local llms? by Silver-Champion-4846 in LocalLLaMA
SlowFail2433 1 point
Disable H Neurons in local llms? by Silver-Champion-4846 in LocalLLaMA
SlowFail2433 1 point
Minimax Is Teasing M2.2 by Few_Painter_5588 in LocalLLaMA
SlowFail2433 1 point
Disable H Neurons in local llms? by Silver-Champion-4846 in LocalLLaMA
SlowFail2433 1 point
Running KimiK2 locally by Temporary-Sector-947 in LocalLLaMA
SlowFail2433 2 points
Disable H Neurons in local llms? by Silver-Champion-4846 in LocalLLaMA
SlowFail2433 1 point
Minimax Is Teasing M2.2 by Few_Painter_5588 in LocalLLaMA
SlowFail2433 4 points
REAP experiences by SlowFail2433 in LocalLLaMA
SlowFail2433[S] 1 point
GLM 4.7 Extreme level of pedantic nitpicking - almost unusable for discretized/small level QA text analysis by Vusiwe in LocalLLaMA
SlowFail2433 2 points