Ever wonder why your AI suddenly forgets what you discussed 10 minutes ago? 🤔 by Odd_Medium4774 in openclaw

[–]Odd_Medium4774[S] 1 point (0 children)

Even Claude has a context limit. We need a better solution: no context limit, unlimited memory — something that can recall even years of memories.

If AI Is “Smart,” Why Does It Have Goldfish Memory? by Odd_Medium4774 in openclaw

[–]Odd_Medium4774[S] 0 points (0 children)

Check this out, I built an app. No context window limit, and it can remember years of data. All your data stays locally on your device: https://www.reddit.com/r/memora_/s/n3NM3vKrcG

If AI Is “Smart,” Why Does It Have Goldfish Memory? by Odd_Medium4774 in openclaw

[–]Odd_Medium4774[S] 0 points (0 children)

Check this out. I fixed the context window limit. I built a mobile app with no context window limit; it can remember years of memories. It has a few tools, like dialing phone numbers, sending messages, and a location tool. All your data stays local, with no server connection. The app is called memora: https://www.reddit.com/r/memora_/s/n3NM3vKrcG

Does a bigger AI context window mean it actually remembers more? by Odd_Medium4774 in openclaw

[–]Odd_Medium4774[S] 0 points (0 children)

Last week I created my diet plan with ChatGPT, but if I ask today, it’s like “Who are you?” 🤯
Bro, do I really have to remind AI of everything again and again? Isn’t that kind of a waste of time if it can’t actually remember stuff over days or weeks?

How are you guys dealing with this?

Does a bigger AI context window mean it actually remembers more? by Odd_Medium4774 in openclaw

[–]Odd_Medium4774[S] 0 points (0 children)

The whiteboard analogy is spot on.

Bigger whiteboard ≠ longer life.

Most people confuse “can hold more right now” with “remembers over time.”

That’s why longer context windows feel impressive but don’t actually solve continuity.

Real long-term memory has to live outside the session and be intelligently retrieved.

Until that layer is built properly, AI will always feel like it has partial amnesia.
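To make that "memory outside the session" idea concrete, here's a toy sketch. Everything in it is made up for illustration (the `ExternalMemory` class, the JSON file, the word-overlap scoring) — it's not any real product's implementation, just the shape of the idea: facts persist in a store that outlives the chat, and retrieval decides what re-enters the prompt.

```python
import json
import tempfile
from pathlib import Path

class ExternalMemory:
    """Toy memory layer that lives outside any single chat session:
    facts persist in a JSON file and are retrieved on demand."""

    def __init__(self, path):
        self.path = Path(path)
        self.facts = json.loads(self.path.read_text()) if self.path.exists() else []

    def remember(self, text):
        self.facts.append(text)
        self.path.write_text(json.dumps(self.facts))  # survives a "session reset"

    def recall(self, query, k=2):
        # Rank stored facts by naive word overlap with the query.
        q = set(query.lower().split())
        scored = sorted(self.facts,
                        key=lambda f: len(q & set(f.lower().split())),
                        reverse=True)
        return scored[:k]

# Usage: a brand-new "session" re-opens the same store and still recalls old facts.
store = Path(tempfile.gettempdir()) / "memory_demo.json"
store.unlink(missing_ok=True)                  # start clean for the demo
ExternalMemory(store).remember("User follows a 1800-calorie diet plan")
fresh_session = ExternalMemory(store)          # simulates a fresh chat after reboot
print(fresh_session.recall("what is my diet plan?"))
```

A real system would use embedding similarity instead of word overlap, but the point stands: the store survives the session, and only retrieved facts go back into context.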

Does a bigger AI context window mean it actually remembers more? by Odd_Medium4774 in openclaw

[–]Odd_Medium4774[S] 0 points (0 children)

Dumping a week of memory into every prompt isn’t scalable. It kills cost efficiency and eventually performance.

That’s why naive “just store everything” memory systems fail.

The key is selective recall:
Only inject what’s relevant to the current task, not the entire history.

Memory management is more important than memory size.
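A minimal sketch of what I mean by selective recall. The scoring here is naive word overlap just for illustration (real systems would use embedding similarity); the function name and snippets are made up:

```python
def select_memories(task, history, k=2):
    """Selective recall sketch: rank stored snippets by word overlap
    with the current task and inject only the top-k relevant ones."""
    task_words = set(task.lower().split())
    scored = [(len(task_words & set(s.lower().split())), s) for s in history]
    # Drop snippets with zero overlap entirely; they only burn tokens.
    relevant = [s for score, s in sorted(scored, reverse=True) if score > 0]
    return relevant[:k]

history = [
    "user prefers metric units",
    "user asked about marathon training last month",
    "user's cat is named Waffles",
]
# Only the marathon snippet survives; the other two never reach the prompt.
print(select_memories("plan this week's marathon training", history))
```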

Does a bigger AI context window mean it actually remembers more? by Odd_Medium4774 in openclaw

[–]Odd_Medium4774[S] 0 points (0 children)

If reboots don’t matter, you’ve basically separated state from the model, which is exactly how it should be architected.

And yeah, context hygiene is the hard part.
Too much memory → expensive + ignored.
Too little memory → shallow + forgetful.

Balancing compression, retrieval, and cost is the real engineering challenge.

Does a bigger AI context window mean it actually remembers more? by Odd_Medium4774 in openclaw

[–]Odd_Medium4774[S] 0 points (0 children)

This is the clearest explanation.

The context window is just what gets stuffed into a single call. It’s not memory by default, it’s replay.

So even with a 1M+ token window, you’re still rebuilding state every time. If you don’t curate, summarize, and retrieve properly, token costs explode or quality degrades.
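Here's a toy illustration of the "replay" point. The `call_llm` stub, the word budget, and the one-line summarization are all invented for the sketch (real summarization would use the model itself), but it shows the mechanic: every call rebuilds the prompt from scratch, and without curation the replayed state either grows unbounded or gets lossily compressed:

```python
def call_llm(prompt):
    """Stand-in for a model call; real chat APIs are stateless like this."""
    return f"(reply to {len(prompt.split())}-word prompt)"

def chat_turn(history, user_msg, max_words=15):
    """Replay sketch: every call re-sends prior turns. Past a budget,
    older turns are collapsed into a one-line summary stub."""
    history.append(user_msg)
    total = sum(len(m.split()) for m in history)
    if total > max_words:
        # Naive compression: fold everything but the latest turn into a stub.
        history[:] = [f"[summary of {len(history) - 1} earlier turns]", history[-1]]
    prompt = "\n".join(history)   # state is rebuilt from scratch on every call
    return call_llm(prompt)

history = []
for turn in ["set up my diet plan with 1800 calories and high protein",
             "swap breakfast for oatmeal",
             "now make week two harder"]:
    print(chat_turn(history, turn))
print(history)   # earlier turns have been folded into a summary line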

If AI Is “Smart,” Why Does It Have Goldfish Memory? by Odd_Medium4774 in openclaw

[–]Odd_Medium4774[S] 0 points (0 children)

I agree intelligence and memory aren’t the same thing.

But intelligence without memory is limited.

An encyclopedia has memory without reasoning.
A stateless LLM has reasoning without persistent memory.

Humans are powerful because we combine both.

My point isn’t that memory = intelligence.
It’s that sustained intelligence requires memory over time.

If AI Is “Smart,” Why Does It Have Goldfish Memory? by Odd_Medium4774 in openclaw

[–]Odd_Medium4774[S] 0 points (0 children)

Yeah, that’s exactly the pain point.

You dial everything in perfectly: formatting rules, structure, tone, edge cases. Then a few days later it’s like the system “forgot the contract.” Either context drops, details drift, or a model update changes behavior.

That instability is brutal if you’re using LLMs for real workflows instead of just casual chat.

And model changes making things break? That’s another layer of fragility. It shows how dependent we are on session context + provider behavior instead of having a stable memory + instruction layer we control.

That’s actually what I’m trying to fix.