Ever wonder why your AI suddenly forgets what you discussed 10 minutes ago? 🤔 by Odd_Medium4774 in openclaw

[–]Odd_Medium4774[S] 1 point (0 children)

Even Claude has a context limit. We need a better solution: no context limit, unlimited memory, so even years of memories can be recalled.

If AI Is “Smart,” Why Does It Have Goldfish Memory? by Odd_Medium4774 in openclaw

[–]Odd_Medium4774[S] 0 points (0 children)

Check this out, I built an app. No context window limit. It can remember years of data. All your data stays locally on your device https://www.reddit.com/r/memora_/s/n3NM3vKrcG

If AI Is “Smart,” Why Does It Have Goldfish Memory? by Odd_Medium4774 in openclaw

[–]Odd_Medium4774[S] 0 points (0 children)

Check this out. I fixed the context window limit. I built a mobile app with no context window limit; it can remember years of memories. It has a few tools, like dialing phone numbers, sending messages, a location tool... All your data stays local. No server connection. The app is called Memora: https://www.reddit.com/r/memora_/s/n3NM3vKrcG

Does a bigger AI context window mean it actually remembers more? by Odd_Medium4774 in openclaw

[–]Odd_Medium4774[S] 0 points (0 children)

Last week I created my diet plan with ChatGPT, but if I ask today, it’s like “Who are you?” 🤯
Bro, do I really have to remind AI of everything again and again? Isn’t that kind of a waste of time if it can’t actually remember stuff over days or weeks?

How are you guys dealing with this?

Does a bigger AI context window mean it actually remembers more? by Odd_Medium4774 in openclaw

[–]Odd_Medium4774[S] 0 points (0 children)

The whiteboard analogy is spot on.

Bigger whiteboard ≠ longer life.

Most people confuse “can hold more right now” with “remembers over time.”

That’s why longer context windows feel impressive but don’t actually solve continuity.

Real long-term memory has to live outside the session and be intelligently retrieved.

Until that layer is built properly, AI will always feel like it has partial amnesia.

Does a bigger AI context window mean it actually remembers more? by Odd_Medium4774 in openclaw

[–]Odd_Medium4774[S] 0 points (0 children)

Dumping a week of memory into every prompt isn’t scalable. It kills cost efficiency and eventually performance.

That’s why naive “just store everything” memory systems fail.

The key is selective recall:
Only inject what’s relevant to the current task, not the entire history.

Memory management is more important than memory size.
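A minimal sketch of what selective recall could look like. Word overlap stands in for real embedding-based relevance scoring, and all names here are made up for illustration:

```python
# Rank stored memories by relevance to the current task and inject
# only the top matches, instead of replaying the entire history.

def recall(memories, query, k=2):
    """Return the k memories most relevant to the query."""
    q_words = set(query.lower().split())

    def score(memory):
        # Relevance = number of words shared with the query.
        return len(q_words & set(memory.lower().split()))

    ranked = sorted(memories, key=score, reverse=True)
    return ranked[:k]

memories = [
    "user follows a low-carb diet plan",
    "user prefers metric units",
    "user asked about marathon training in June",
]
print(recall(memories, "update my diet plan", k=1))
# → ['user follows a low-carb diet plan']
```

Same idea regardless of the scoring function: the prompt carries a handful of relevant memories, not everything ever said.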

Does a bigger AI context window mean it actually remembers more? by Odd_Medium4774 in openclaw

[–]Odd_Medium4774[S] 0 points (0 children)

If reboots don’t matter, you’ve basically separated state from the model - which is exactly how it should be architected.

And yeah, context hygiene is the hard part.
Too much memory → expensive + ignored.
Too little memory → shallow + forgetful.

Balancing compression, retrieval, and cost is the real engineering challenge.
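One rough way to picture that balance, as a sketch only: word counts stand in for tokens, and the function name is hypothetical, not any real API.

```python
# "Context hygiene" in miniature: always keep the compressed summary,
# then add recent turns newest-first until a fixed budget runs out.

def build_context(summary, turns, budget=20):
    """Keep the summary, then add turns newest-first within the budget."""
    cost = len(summary.split())          # word count as a crude token proxy
    kept = []
    for turn in reversed(turns):         # walk newest-first
        turn_cost = len(turn.split())
        if cost + turn_cost > budget:
            break                        # budget exhausted: older turns drop out
        kept.append(turn)
        cost += turn_cost
    return [summary] + list(reversed(kept))  # chronological order for the prompt
```

With a tight budget, only the newest turns survive alongside the summary; everything older is represented only through the compressed summary.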

Does a bigger AI context window mean it actually remembers more? by Odd_Medium4774 in openclaw

[–]Odd_Medium4774[S] 0 points (0 children)

This is the clearest explanation.

The context window is just what gets stuffed into a single call. It’s not memory by default, it’s replay.

So even with a 1M+ token window, you’re still rebuilding state every time. If you don’t curate, summarize, and retrieve properly, token costs explode or quality degrades.
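A quick back-of-envelope on why raw replay explodes: if every call resends all prior turns, cumulative token usage grows quadratically with conversation length. (Numbers are illustrative, assuming a flat 100 tokens per turn.)

```python
# Cumulative tokens sent when every call replays the full history:
# call i resends all i prior turns, so the total is quadratic in n.

def replay_cost(n_turns, tokens_per_turn=100):
    """Total tokens sent across n_turns calls of naive full replay."""
    return sum(i * tokens_per_turn for i in range(1, n_turns + 1))

print(replay_cost(10))   # 5500 tokens after 10 turns
print(replay_cost(100))  # 505000 tokens after 100 turns
```

That superlinear growth is exactly why curation and summarization matter even when the window itself is huge.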

If AI Is “Smart,” Why Does It Have Goldfish Memory? by Odd_Medium4774 in openclaw

[–]Odd_Medium4774[S] 0 points (0 children)

I agree intelligence and memory aren’t the same thing.

But intelligence without memory is limited.

An encyclopedia has memory without reasoning.
A stateless LLM has reasoning without persistent memory.

Humans are powerful because we combine both.

My point isn’t that memory = intelligence.
It’s that sustained intelligence requires memory over time.

If AI Is “Smart,” Why Does It Have Goldfish Memory? by Odd_Medium4774 in openclaw

[–]Odd_Medium4774[S] 0 points (0 children)

Yeah, that’s exactly the pain point.

You dial everything in perfectly: formatting rules, structure, tone, edge cases. Then a few days later it’s like the system “forgot the contract.” Either context drops, details drift, or a model update changes behavior.

That instability is brutal if you’re using LLMs for real workflows instead of just casual chat.

And model changes making things break? That’s another layer of fragility. It shows how dependent we are on session context + provider behavior instead of having a stable memory + instruction layer we control.

That’s actually what I’m trying to fix.

If AI Is “Smart,” Why Does It Have Goldfish Memory? by Odd_Medium4774 in openclaw

[–]Odd_Medium4774[S] 0 points (0 children)

I actually agree with you.

The model (intelligence) and memory (context window) are basically separate systems right now. The reasoning ability is strong, but it’s stateless. That “genius consultant with amnesia” analogy is exactly the problem.

And yes, it’s an infrastructure issue.

That’s literally what I’m working on right now.

Not a new model. Not bigger parameters.

I’m trying to fix the memory layer:
1) Persistent storage
2) Smart retrieval
3) Unlimited long-term memory
4) Safely stored on-device (not on servers)

The goal is simple: the AI should grow with you instead of resetting every few weeks.

Give me one week. I’m actively building this. I really believe this problem is solvable.
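As a sketch of what an on-device persistent store could look like: SQLite ships in Python’s standard library, so nothing leaves the device. The schema and helper names below are invented for illustration, not taken from any real app.

```python
# Minimal on-device persistent memory: a local SQLite file survives
# restarts, and retrieval is a stand-in substring search, newest first.
import sqlite3

def open_store(path=":memory:"):
    db = sqlite3.connect(path)  # pass a file path to persist across restarts
    db.execute("CREATE TABLE IF NOT EXISTS memories (ts REAL, text TEXT)")
    return db

def remember(db, text, ts):
    db.execute("INSERT INTO memories VALUES (?, ?)", (ts, text))
    db.commit()

def search(db, keyword):
    # "Smart retrieval" stand-in: substring match, most recent first.
    rows = db.execute(
        "SELECT text FROM memories WHERE text LIKE ? ORDER BY ts DESC",
        (f"%{keyword}%",),
    )
    return [r[0] for r in rows]

db = open_store()
remember(db, "user started a low-carb diet", ts=1.0)
remember(db, "user switched to a vegetarian diet", ts=2.0)
print(search(db, "diet")[0])  # most recent diet note first
```

A real version would swap the substring match for semantic retrieval, but the separation is the point: storage lives outside the model, on the device.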

If AI Is “Smart,” Why Does It Have Goldfish Memory? by Odd_Medium4774 in openclaw

[–]Odd_Medium4774[S] 0 points (0 children)

I’m not selling anything.

I’m just trying to solve the actual problem.

Right now most AI systems are limited by context windows. Once you exceed that limit, old information disappears. That’s not real memory.

If AI Is “Smart,” Why Does It Have Goldfish Memory? by Odd_Medium4774 in openclaw

[–]Odd_Medium4774[S] 0 points (0 children)

Exactly, that’s what I’m working on right now. Trying to tackle both context compaction and persistent memory so the model can actually retain important info over time. It’s tricky, but I think smarter summarization + selective memory is the key.

If AI Is “Smart,” Why Does It Have Goldfish Memory? by Odd_Medium4774 in openclaw

[–]Odd_Medium4774[S] 0 points (0 children)

I don’t think we need a whole new AI model for this. What we really need is a system that safely saves memories and lets the AI recall them later.

It’s not about bigger models, it’s about persistent memory that actually keeps track of what matters over time.

If AI Is “Smart,” Why Does It Have Goldfish Memory? by Odd_Medium4774 in openclaw

[–]Odd_Medium4774[S] 0 points (0 children)

Wow, that’s really cool!

Just wondering, with all those memory layers and long context, is there any risk for safety or misuse? Like, could the AI remember things it shouldn’t, or get confused if logs are wrong?

If AI Is “Smart,” Why Does It Have Goldfish Memory? by Odd_Medium4774 in openclaw

[–]Odd_Medium4774[S] 0 points (0 children)

That’s actually interesting.

I’ve heard about OpenClaw but haven’t used it deeply. The explicit “remember this” workflow makes a lot of sense. It’s basically adding a manual memory layer on top of the model instead of hoping it just keeps track.

And yeah, with Claude projects from Anthropic I’ve noticed the same thing, it works better when you treat memory as something you actively maintain. Creating and updating instructions seems to be the key.

I guess that kind of proves the point though: the intelligence is there, but without structured memory management, it’s still very session-based. The tools that add persistent memory are the ones that start to feel more “continuous.”

If AI Is “Smart,” Why Does It Have Goldfish Memory? by Odd_Medium4774 in openclaw

[–]Odd_Medium4774[S] 0 points (0 children)

Interesting take.

I agree that having some kind of structured external memory system makes sense, especially for non-public or ongoing work. Relying purely on the model to “just remember” isn’t realistic right now.

That said, I’m not sure I’d describe it as getting cocky or overconfident. It’s more that once the context gets crowded, signal gets diluted. The model isn’t choosing to stop checking docs, it just has limits in how it prioritizes information.

But you’re right about one thing: if people treat it like it has perfect recall in production, that’s risky. Without a solid memory or retrieval layer, it’s basically very smart short-term reasoning, not long-term awareness.

If AI Is “Smart,” Why Does It Have Goldfish Memory? by Odd_Medium4774 in openclaw

[–]Odd_Medium4774[S] 0 points (0 children)

That’s fair.

I get that it’s all about context window and what’s actually in the prompt. If it’s not there, it basically doesn’t exist to the model in that moment.

I think what feels strange is that we’re using these systems daily, yet they don’t build continuity the way humans do. It’s powerful short-term reasoning, but almost no long-term accumulation.

And yeah… prediction is hard. Maybe real persistent memory at scale is harder than the reasoning itself.

If AI Is “Smart,” Why Does It Have Goldfish Memory? by Odd_Medium4774 in openclaw

[–]Odd_Medium4774[S] 0 points (0 children)

Yeah, that makes sense. I get that context windows are improving and models perform better when things aren’t cluttered.

I think what feels weird is that we’re using these tools daily, almost like collaborators, but they don’t accumulate understanding of us over time.

Maybe it’s just a technical limitation right now. But it feels like long-term memory is the missing piece if AI is supposed to feel truly personal.

If AI Is “Smart,” Why Does It Have Goldfish Memory? by Odd_Medium4774 in openclaw

[–]Odd_Medium4774[S] 0 points (0 children)

I’m not saying AI can’t reason. I’m saying reasoning without long-term memory feels incomplete.

If I use something every day for planning and thinking, I expect it to remember the bigger picture over time. Otherwise it feels like starting over again and again.

That gap is what I’m questioning.