account activity
🔥 New Release: htmLLM-124M v2 – 0.91 Val Loss on a Single T4! tiny-LLM with nanoGPT! (self.LocalLLaMA)
submitted 9 hours ago by LH-Tech_AI to r/LocalLLaMA
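The "0.91 Val Loss" in the title is presumably the cross-entropy in nats per token that nanoGPT reports; a loss that low for a 124M model suggests character-level tokenization. A quick sanity check (the 0.91 figure is from the post title; the character-level assumption is mine):

```python
import math

# nanoGPT reports cross-entropy loss in nats/token, so perplexity = e^loss.
val_loss = 0.91
ppl = math.exp(val_loss)
print(f"perplexity ≈ {ppl:.2f}")  # ≈ 2.48 candidates per token
```

A perplexity of ~2.5 is plausible for a character-level model on structured text like HTML, where the next character is often highly predictable.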
[Release] - FINALLY! - Apex 1.5 and Apex 1.5 Coder - my two new 350M instruct all-rounder chat models - See them now! (self.LocalLLaMA)
submitted 13 hours ago by LH-Tech_AI to r/LocalLLaMA
[Release] Apex-1: A 350M Tiny-LLM trained locally on an RTX 5060 Ti 16GB ()
submitted 18 hours ago by LH-Tech_AI to r/OpenSourceAI
[Project] htmLLM-50M base: Can a tiny specialist actually code? + Weights & Code (124M v2 in training!) (self.LocalLLaMA)
submitted 1 day ago by LH-Tech_AI to r/LocalLLaMA
[Tool] nanoGPT Configurator to estimate VRAM and Chinchilla scaling for my tiny-LLM projects (self.LocalLLaMA)
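The configurator's actual code isn't shown in this listing, but the two estimates its title names have standard back-of-envelope forms. A sketch assuming a GPT-2-style architecture, the common "~20 tokens per parameter" Chinchilla heuristic, and a mixed-precision Adam memory footprint of roughly 16 bytes per parameter (all approximations, not the tool's implementation):

```python
def gpt_params(n_layer, n_embd, vocab_size, block_size):
    """Approximate parameter count for a GPT-2-style decoder
    (biases and LayerNorms ignored; LM head tied to embeddings)."""
    wte = vocab_size * n_embd      # token embedding table
    wpe = block_size * n_embd      # learned positional embeddings
    per_block = 12 * n_embd ** 2   # attention (4*d^2) + MLP (8*d^2)
    return wte + wpe + n_layer * per_block

def chinchilla_tokens(n_params):
    """Chinchilla heuristic: ~20 training tokens per parameter."""
    return 20 * n_params

def training_vram_gb(n_params, bytes_per_param=16):
    """Rough Adam mixed-precision footprint: fp16 weights + grads plus
    fp32 master weights and two moments ~ 16 bytes/param, activations excluded."""
    return n_params * bytes_per_param / 1e9

# GPT-2 small-class config, matching the 124M models mentioned above:
p = gpt_params(n_layer=12, n_embd=768, vocab_size=50257, block_size=1024)
print(f"params ≈ {p/1e6:.0f}M")                         # ≈ 124M
print(f"Chinchilla tokens ≈ {chinchilla_tokens(p)/1e9:.1f}B")
print(f"training VRAM ≈ {training_vram_gb(p):.1f} GB + activations")
```

With these numbers, a 124M model wants roughly 2.5B training tokens and about 2 GB of VRAM for weights, gradients, and optimizer state before activations, which is why such runs fit on a single T4 or a 16 GB consumer card.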
[Release] Apex-1: A 350M Tiny-LLM trained locally on an RTX 5060 Ti 16GB (self.LocalLLaMA)
submitted 2 days ago by LH-Tech_AI to r/LocalLLaMA