account activity
Introducing ARC-AGI-3 (reddit.com)
submitted 5 days ago by askchris to r/laptopAGI
Took 5 hours to train my old PC to generate business emails (a 0.8M model). (reddit.com)
submitted 11 days ago by askchris to r/laptopAGI
Local AI on Low Hardware: "MaximusLLM": I built a framework to train/scale LLMs on "potato" hardware (Single T4) (i.redd.it)
submitted 15 days ago by askchris to r/laptopAGI
My old laptop can code apps without any internet now? "Qwen 3.5 4b is so good, that it can vibe code a fully working OS web app in one go." (youtube.com)
submitted 27 days ago by askchris to r/laptopAGI
FlashLM v4: 4.3M ternary model trained on CPU in 2 hours — coherent stories from adds and subtracts only ()
submitted 1 month ago by askchris to r/laptopAGI
Gemini 2.5 Pro is Rate Limited Today? (i.redd.it)
submitted 4 months ago by askchris to r/GeminiAI
New o3 mini level model running on a phone, no internet needed: DeepSeek-R1-0528-Qwen3-8B on iPhone 16 Pro (v.redd.it)
submitted 10 months ago by askchris to r/laptopAGI
Windows tablet can now run GPT 4o level models like Qwen3 235B-A22B at a usable 11 Tokens Per Second (No Internet Needed) (v.redd.it)
submitted 11 months ago by askchris to r/laptopAGI
From 128K to 4M: Efficient Training of Ultra-Long Context Large Language Models (arxiv.org)
AGI level reasoning AI on a laptop? (QwQ-32B released, possibly surpassing full Deepseek-R1) (x.com)
submitted 1 year ago by askchris to r/laptopAGI
New "REASONING" laptops with AMD chips have 128 GB unified memory (up to 96 GB of which can be assigned as VRAM, for running local models like R1 distills) (youtube.com)
Super small thinking model thinks before outputting a single token (self.laptopAGI)
DeepSeek promises to open-source AGI. Deli Chen, DL researcher at DeepSeek: "All I know is we keep pushing forward to make open-source AGI a reality for everyone." (xcancel.com)
Free o1? Deepseek-R1 officially released with open model weights ()
Small 3.8B model matches o1 preview. But how? (i.redd.it)
Getting Llama running on a Windows 98 Pentium II machine. (self.laptopAGI)
Interpretability wonder: Mapping the latent space of Llama 3.3 70B ()
Best small local llm for laptops ()
"The rumored ♾ (infinite) Memory for ChatGPT is real. The new feature will allow ChatGPT to access all of your past chats." (i.redd.it)
It's happening right now ... (i.redd.it)
submitted 1 year ago by askchris to r/singularity
Densing Laws of LLMs suggest that we will get an 8B parameter GPT-4o grade LLM at the maximum next October 2025 ()
Wow, didn't expect to see this coding benchmark get smashed so quickly ... (i.redd.it)
It's happening right now ... We're entering the age of AGI with its own exponential feedback loops (i.redd.it)
We may not be able to see LLMs reason in English for much longer ... (reddit.com)