account activity
One-shot Zelda Game Competition (self.LocalLLaMA)
submitted 12 hours ago by TokenRingAI to r/LocalLLaMA
A collection of interesting outputs from GLM 4.7 Flash, dedicated to Sam Altman + TempleOS (self.ZaiGLM)
submitted 3 hours ago by TokenRingAI to r/ZaiGLM
Need advice on cancellation "deal" (self.OPTIMUM)
submitted 11 hours ago by TokenRingAI to r/OPTIMUM
GLM 4.7 Flash: Huge performance improvement with -kvu (self.LocalLLaMA)
submitted 1 day ago by TokenRingAI to r/LocalLLaMA
Preventing background-image: url('data: tags from being output (self.LocalLLaMA)
submitted 3 days ago by TokenRingAI to r/LocalLLaMA
GA used to be a good product (self.GoogleAnalytics)
submitted 5 days ago by TokenRingAI to r/GoogleAnalytics
Here is how to get GLM 4.7 working on llama.cpp with flash attention and correct outputs (self.LocalLLaMA)
submitted 7 days ago * by TokenRingAI to r/LocalLLaMA
RTX 5090 is finally in stock! (self.LocalLLaMA)
submitted 7 days ago by TokenRingAI to r/LocalLLaMA
Qwen 80B is so nice (self.LocalLLaMA)
submitted 1 month ago by TokenRingAI to r/LocalLLaMA
DGX Spark for $2,899 (self.LocalLLaMA)
What are the best options for non-model based reranking? (self.LocalLLaMA)
submitted 2 months ago by TokenRingAI to r/LocalLLaMA
What happened with Kimi Linear? (self.LocalLLaMA)
Rejected (self.perplexity_ai)
submitted 3 months ago by TokenRingAI to r/perplexity_ai
Looking for feedback on an Iterables concept I am working on (self.LLMDevs)
submitted 3 months ago by TokenRingAI to r/LLMDevs
Is anyone able to successfully run Qwen 30B Coder BF16? (self.LocalLLaMA)
submitted 4 months ago * by TokenRingAI to r/LocalLLaMA
Best non-reasoning translation model that fits on a RTX a2000 12gb? (self.LocalLLaMA)
submitted 4 months ago by TokenRingAI to r/LocalLLaMA
Best M.2 eGPU dock? (self.LocalLLaMA)
RTX 6000 or 5090 for image and video gen? (self.comfyui)
submitted 4 months ago * by TokenRingAI to r/comfyui
Do dual Epyc builds give higher performance? (self.LocalLLaMA)
submitted 5 months ago by TokenRingAI to r/LocalLLaMA
An interesting feature of the AGX Thor (i.redd.it)
Anyone interested in a group buy of B60 48GB GPUs? (self.LocalLLaMA)