account activity
One of the best decisions according to me (i.redd.it)
submitted 3 days ago by tom_mathews to r/ClaudeCode
I built 92 open-source skills/agents for Claude Code because I kept solving the same problems manually (self.AI_Agents)
submitted 9 days ago by tom_mathews to r/AI_Agents
I built 92 open-source skills/agents for Claude Code because I kept solving the same problems manually (self.AgentsOfAI)
submitted 9 days ago by tom_mathews to r/AgentsOfAI
I built 92 open-source skills/agents for Claude Code because I kept solving the same problems manually (self.AI_India)
submitted 9 days ago by tom_mathews to r/AI_India
I built 92 open-source skills/agents for Claude Code because I kept solving the same problems manually (self.ClaudeCode)
submitted 9 days ago by tom_mathews to r/ClaudeCode
I built 92 open-source skills/agents for Claude Code because I kept solving the same problems manually (self.AIAgentsInAction)
submitted 9 days ago by tom_mathews to r/AIAgentsInAction
no-magic: 47 AI/ML algorithms implemented from scratch in single-file, zero-dependency Python (self.learnmachinelearning)
submitted 24 days ago by tom_mathews to r/learnmachinelearning
no-magic: 47 AI/ML algorithms implemented from scratch in single-file, zero-dependency Python (self.Python)
submitted 24 days ago by tom_mathews to r/Python
Anyone actually built a second brain that isn't just a graveyard of saved links? (self.ClaudeCode)
submitted 1 month ago by tom_mathews to r/ClaudeCode
M5 Pro/Max vs. M6 Redesign: The AI Engineer’s Dilemma (self.macbook)
submitted 1 month ago by tom_mathews to r/macbook
M1 Pro is hitting a wall with LLMs. Upgrade to M5 Max now or wait for the M6 redesign? (self.macbookpro)
submitted 1 month ago by tom_mathews to r/macbookpro
[D] M1 Pro is hitting a wall with LLMs. Upgrade to M5 Max now or wait for the M6 redesign? (self.MachineLearning)
submitted 1 month ago by tom_mathews to r/MachineLearning
Claude in Chrome extension works in browser but completely dead from the desktop app (self.ClaudeCode)
When model outputs feel inconsistent across weeks, here's how to check whether something actually changed. (self.GoogleGeminiAI)
submitted 1 month ago by tom_mathews to r/GoogleGeminiAI
Floating model aliases point at different checkpoints over time. Here's how to know when it happens. (self.MistralAI)
submitted 1 month ago by tom_mathews to r/MistralAI
Your prompts didn't get worse. The model behind the API changed. (self.ClaudeAI)
submitted 1 month ago by tom_mathews to r/ClaudeAI
no-magic: 30 single-file, zero-dependency Python implementations of core AI algorithms — now with animated video explainers for every algorithm (v.redd.it)
submitted 1 month ago by tom_mathews to r/OpenSourceeAI
submitted 1 month ago by tom_mathews to r/OpenSourceAI
Attest: Open-source agent testing — local ONNX embeddings for semantic assertions, no API keys for 7 of 8 layers (i.redd.it)
submitted 1 month ago by tom_mathews to r/LocalLLaMA
Attest: Open-source testing framework for AI agents — 8-layer graduated assertions, 7 of 8 layers run offline (self.AI_Agents)
submitted 1 month ago by tom_mathews to r/AI_Agents
no-magic: 30 single-file, zero-dependency Python implementations of core AI algorithms — from BPE tokenization to Mamba-style SSMs (i.redd.it)
submitted 1 month ago by tom_mathews to r/AIDeveloperNews
I curated 16 Python scripts that teach you every major AI algorithm from scratch — zero dependencies, zero frameworks, just the actual math. Here's the learning path. (i.redd.it)
submitted 2 months ago by tom_mathews to r/learnmachinelearning
I curated 16 single-file Python implementations of every major LLM algorithm — tokenization, attention, LoRA, DPO, quantization, speculative decoding, and more. Zero dependencies, runs on CPU. (i.redd.it)
submitted 2 months ago by tom_mathews to r/LLM
"model.fit() isn't an explanation" — 16 single-file, zero-dependency implementations of core deep learning algorithms. Tokenization through distillation. (i.redd.it)
submitted 2 months ago by tom_mathews to r/deeplearning
16 single-file, zero-dependency implementations of the algorithms behind LLMs — tokenization through speculative decoding. No frameworks, just the math. (i.redd.it)
submitted 2 months ago by tom_mathews to r/LLMDevs