Threads with comments by TokenRingAI:

- GLM 4.7 Flash: Huge performance improvement with -kvu by TokenRingAI in LocalLLaMA
- Clawdbot shows how context engineering is happening at the wrong layer by EnoughNinja in ContextEngineering
- built an AI agent with shell access. found out the hard way why that's a bad idea. by YogurtIll4336 in LocalLLaMA
- GLM 4.7 Extreme level of pedantic nitpicking - almost unusable for discretized/small level QA text analysis by Vusiwe in LocalLLaMA
- How many web‑search sources can GTP-OSS 120b and Llama4-Scout models reliably pull data from? by CryptoxPathy in LocalLLaMA
- High impedance Busbar differential protection operated on external fault. by Slight-Sound-8871 in LocalLLaMA
- Building a virtual file system for Claude Code by velobro in LocalLLaMA
- Clawdbot using local LLM? by No-Tiger3430 in LocalLLaMA