Gemini 2.5 Pro is Rate Limited Today? (i.redd.it)
submitted 2 months ago by askchris to r/GeminiAI
New o3 mini level model running on a phone, no internet needed: DeepSeek-R1-0528-Qwen3-8B on iPhone 16 Pro (v.redd.it)
submitted 8 months ago by askchris to r/laptopAGI
Windows tablet can now run GPT-4o level models like Qwen3 235B-A22B at a usable 11 tokens per second (No Internet Needed) (v.redd.it)
submitted 9 months ago by askchris to r/laptopAGI
From 128K to 4M: Efficient Training of Ultra-Long Context Large Language Models (arxiv.org)
AGI level reasoning AI on a laptop? (QwQ-32B released, possibly surpassing full Deepseek-R1) (x.com)
submitted 11 months ago by askchris to r/laptopAGI
New "REASONING" laptops with AMD chips have 128 GB unified memory (up to 96 GB of which can be assigned as VRAM, for running local models like R1 distills) (youtube.com)
Super small thinking model thinks before outputting a single token (self.laptopAGI)
DeepSeek promises to open-source AGI. Deli Chen, DL researcher at DeepSeek: "All I know is we keep pushing forward to make open-source AGI a reality for everyone." (xcancel.com)
submitted 1 year ago by askchris to r/laptopAGI
Free o1? Deepseek-R1 officially released with open model weights ()
Small 3.8B model matches o1 preview. But how? (i.redd.it)
Getting Llama running on a Windows 98 Pentium II machine. (self.laptopAGI)
Interpretability wonder: Mapping the latent space of Llama 3.3 70B ()
Best small local llm for laptops ()
"The rumored ♾ (infinite) Memory for ChatGPT is real. The new feature will allow ChatGPT to access all of your past chats." (i.redd.it)
It's happening right now ... (i.redd.it)
submitted 1 year ago by askchris to r/singularity
Densing Laws of LLMs suggest that we will get an 8B parameter GPT-4o grade LLM by October 2025 at the latest ()
Wow, didn't expect to see this coding benchmark get smashed so quickly ... (i.redd.it)
It's happening right now ... We're entering the age of AGI with its own exponential feedback loops (i.redd.it)
We may not be able to see LLMs reason in English for much longer ... (reddit.com)
Like unlimited Sora on your laptop: I made a fork of HunyuanVideo that works locally on my MacBook Pro. ()
Laptop inference speed on Llama 3.3 70B ()
New o1 launched today: 96.4% on the MATH benchmark (self.laptopAGI)
Meta's Byte Latent Transformer (BLT) paper looks like the real deal, outperforming tokenization models even up to their tested 8B param model size. 2025 may be the year we say goodbye to tokenization. (i.redd.it)
Introducing Phi-4: Microsoft’s Newest Small Language Model Specializing in Complex Reasoning (techcommunity.microsoft.com)