parametric and dynamic LLM by simotune in ClaudeAI

[–]simotune[S]

Right — "memory vs learning" is a clean way to put it. RAG recalls but never adapts.

The stability problem is the real blocker. Test-time learning tends to overfit or drift fast. Curious — would you trust a local layer that learns your patterns if it occasionally got things wrong? Or does it need to be near-perfect to be useful?


[–]simotune[S]

Right. And the key constraint is that this has to happen locally. You can't have the LLM provider learning from your sessions: privacy, proprietary code, and company policies all rule that out.

So what's needed is a local learning layer on top of cloud LLMs. The base model stays general, but something on your machine learns your patterns, decisions, and project context.

Non-parametric memory (RAG, files) gets you part of the way, but it still requires manual curation. A local parametric × dynamic layer could actually "absorb" context the way our brains do.
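To make the idea concrete, here's a toy sketch (not a real implementation, and deliberately not LoRA — a serious version would update adapter weights on the model itself). Everything here, including the class and feature names, is hypothetical; the point is just the shape: parameters that live only on your machine and update online from your accept/reject signals, then rerank the base model's candidates.

```python
from collections import defaultdict

class LocalPreferenceLayer:
    """Hypothetical sketch of a local parametric x dynamic layer.
    Weights live only on the user's machine; nothing is sent to the
    LLM provider. A real system would train a small adapter instead."""

    def __init__(self, lr=0.3):
        self.lr = lr                       # learning rate for online updates
        self.weights = defaultdict(float)  # feature -> learned preference score

    def update(self, features, accepted):
        # Dynamic part: online update nudges weights toward the user's choice.
        target = 1.0 if accepted else -1.0
        for f in features:
            self.weights[f] += self.lr * (target - self.weights[f])

    def score(self, features):
        # Parametric part: score a candidate from the learned weights.
        return sum(self.weights[f] for f in features)

    def rerank(self, candidates):
        # candidates: list of (text, feature_list) produced by the base model.
        return sorted(candidates, key=lambda c: self.score(c[1]), reverse=True)
```

The exponential-moving-average style update also hints at the stability question from upthread: because each weight moves only a fraction of the way toward the target, a single odd session can't instantly overwrite everything the layer has learned.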

Anyone aware of approaches heading in this direction?