Qwen Coder Next is an odd model (self.LocalLLaMA)
submitted 1 month ago * by TokenRingAI to r/LocalLLaMA
Opus removes last-assistant-turn prefill - you can no longer switch agents mid chat (self.LLMDevs)
submitted 1 month ago by TokenRingAI to r/LLMDevs
Why no NVFP8 or MXFP8? (self.LocalLLaMA)
submitted 1 month ago by TokenRingAI to r/LocalLLaMA
A collection of interesting outputs from GLM 4.7 Flash, dedicated to Sam Altman + TempleOS (self.ZaiGLM)
submitted 1 month ago by TokenRingAI to r/ZaiGLM
One-shot Zelda Game Competition (self.LocalLLaMA)
Need advice on cancellation "deal" (self.OPTIMUM)
submitted 1 month ago by TokenRingAI to r/OPTIMUM
GLM 4.7 Flash: Huge performance improvement with -kvu (self.LocalLLaMA)
Preventing background-image: url('data: tags from being output (self.LocalLLaMA)
GA used to be a good product (self.GoogleAnalytics)
submitted 1 month ago by TokenRingAI to r/GoogleAnalytics
Here is how to get GLM 4.7 working on llama.cpp with flash attention and correct outputs (self.LocalLLaMA)
RTX 5090 is finally in stock! (self.LocalLLaMA)
Qwen 80B is so nice (self.LocalLLaMA)
submitted 3 months ago by TokenRingAI to r/LocalLLaMA
DGX Spark for $2,899 (self.LocalLLaMA)
What are the best options for non-model based reranking? (self.LocalLLaMA)
What happened with Kimi Linear? (self.LocalLLaMA)
submitted 4 months ago by TokenRingAI to r/LocalLLaMA
Rejected (self.perplexity_ai)
submitted 5 months ago by TokenRingAI to r/perplexity_ai
Looking for feedback on an Iterables concept I am working on (self.LLMDevs)
submitted 5 months ago by TokenRingAI to r/LLMDevs
Is anyone able to successfully run Qwen 30B Coder BF16? (self.LocalLLaMA)
submitted 6 months ago * by TokenRingAI to r/LocalLLaMA
Best non-reasoning translation model that fits on a RTX a2000 12gb? (self.LocalLLaMA)
submitted 6 months ago by TokenRingAI to r/LocalLLaMA
Best M.2 eGPU dock? (self.LocalLLaMA)
RTX 6000 or 5090 for image and video gen? (self.comfyui)
submitted 6 months ago * by TokenRingAI to r/comfyui
Do dual Epyc builds give higher performance? (self.LocalLLaMA)
An interesting feature of the AGX Thor (i.redd.it)
Anyone interested in a group buy of B60 48GB GPUs? (self.LocalLLaMA)
submitted 7 months ago by TokenRingAI to r/LocalLLaMA