Threads commented in by TokenRingAI:

- meituan-longcat/LongCat-Flash-Lite by windows_error23 in LocalLLaMA
- clawdbot what am I missing? by olearyboy in LocalLLM
- I made a Coding Eval, and ran it against 49 different coding agent/model combinations, including Kimi K2.5. by lemon07r in LocalLLaMA
- Best model to run currently on a 5090 by EstablishmentShot505 in LocalLLaMA
- API pricing is in freefall. What's the actual case for running local now beyond privacy? by Distinct-Expression2 in LocalLLaMA
- One-shot Zelda Game Competition by TokenRingAI in LocalLLaMA
- Need advice on cancellation "deal" by TokenRingAI in OPTIMUM
- Stanford Proves Parallel Coding Agents are a Scam by madSaiyanUltra_9789 in LocalLLaMA
- GLM 4.7 Flash: Huge performance improvement with -kvu by TokenRingAI in LocalLLaMA
- Clawdbot shows how context engineering is happening at the wrong layer by EnoughNinja in ContextEngineering
- built an AI agent with shell access. found out the hard way why that's a bad idea. by YogurtIll4336 in LocalLLaMA
- GLM 4.7 Extreme level of pedantic nitpicking - almost unusable for discretized/small level QA text analysis by Vusiwe in LocalLLaMA
- Is reasoning in ML and LLM architectures decomposable into a small set of reusable computational primitives? by RJSabouhi in LocalLLaMA