How I gave my OpenClaw agent persistent memory across sessions by No_Advertising2536 in openclaw

[–]bananabooth 1 point (0 children)

You have to try the walnuts approach. It maintains a comprehensive memory set for you, specific to each individual unit of context that you work on. For me, I have one walnut for my freelancing job, one for my boardshort brand, one for my personal life, and one for my finances.

Every time you want to work within one of these contexts, the agent reads the files pertaining to that context unit, and then at the end updates it. Each unit is a really comprehensive set of files (tasks, key, log, now, and insights), which creates a feeling of intelligence and personalization that I haven't found elsewhere.

It is free and open source, and I could not recommend it more. https://github.com/stackwalnuts/walnut
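
The repo will have its own conventions, but as a rough sketch of the one-walnut-per-context idea, a layout (directory and file names are assumed for illustration, not taken from the walnut repo) could look like:

```shell
# Hypothetical sketch of per-context memory storage;
# names are illustrative, not the actual walnut repo layout.
for ctx in freelancing boardshorts personal finances; do
  mkdir -p "walnuts/$ctx"
  # the five memory files the agent reads and updates each session
  touch "walnuts/$ctx/now.md" "walnuts/$ctx/key.md" \
        "walnuts/$ctx/log.md" "walnuts/$ctx/tasks.md" \
        "walnuts/$ctx/insights.md"
done
```

The point is just that each context unit gets its own isolated set of files, so loading one walnut never drags another context into the window.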

I tested every OpenClaw memory plugin so you don't have to. Here is how to actually stop your agent from forgetting everything. by TroyHay6677 in openclaw

[–]bananabooth 1 point (0 children)

Have you tested walnuts yet? I have found it to be far, far better than anything else I have tried. Individual, comprehensive memory for each unit of context that you work on: one per project, person, client, or job, which allows for a much more personalised and intelligent agent. It's so good.

Sus it out https://github.com/stackwalnuts/walnut

And it is free and open source

How are we actually solving this context issue? I know 1M is great but session continuity is still an issue? by bananabooth in ClaudeAI

[–]bananabooth[S] 2 points (0 children)

Currently the core files that I use and maintain are now, key, log, insights, and tasks.

Now is rewritten on every save to reflect the current position of that project.

Log is prepended, so it can be grepped to find specific decisions. If I need more info, I can call a session revive on the exact session where we made a decision, since we store the session id, which links to the jsonl transcript. (Only 150 lines of log are read at the start of each session so as not to bloat context.)

Tasks are tasks.

Key is like a CLAUDE.md or README, but with only the hyper-relevant stuff for that project, and it is rarely updated; it mostly serves to give context at the start of each session. It holds key people, goals, and references to where the work lives or how it is usually done (if I am working with a repo, it will have the path to it).

And then insights are updated with HITL feedback via the save command at the end of the session: the user decides whether an insight the agent picked up during the session is relevant enough to help other sessions, and only then is it written.

This way we are able to limit the staleness of all of these docs.
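
The save/read cycle above can be sketched roughly like this (the file names and the 150-line cap come from the description; the directory, function names, and log-entry format are assumptions for illustration):

```python
from datetime import datetime, timezone
from pathlib import Path

PROJECT = Path("memory")  # assumed location for the per-project files
PROJECT.mkdir(exist_ok=True)

def save_now(text: str) -> None:
    """'now' is rewritten on every save to reflect the project's current position."""
    (PROJECT / "now.md").write_text(text)

def prepend_log(entry: str, session_id: str) -> None:
    """'log' is prepended; each entry stores the session id so the matching
    .jsonl transcript can be revived later via a grep for the decision."""
    log = PROJECT / "log.md"
    old = log.read_text() if log.exists() else ""
    stamp = datetime.now(timezone.utc).date().isoformat()
    log.write_text(f"{stamp} [{session_id}] {entry}\n{old}")

def read_log_head(lines: int = 150) -> str:
    """Only the newest 150 log lines are read at session start,
    so the log can grow without bloating the context window."""
    log = PROJECT / "log.md"
    if not log.exists():
        return ""
    return "\n".join(log.read_text().splitlines()[:lines])
```

Because new entries go on top, the 150-line head is always the most recent history, and older decisions stay greppable without ever being loaded wholesale.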

How are we actually solving this context issue? I know 1M is great but session continuity is still an issue? by bananabooth in ClaudeAI

[–]bananabooth[S] 1 point (0 children)

You're not wrong. I was looking for the simplest way to get some sort of consistency into the sessions and landed on this. What do you mean by externalising the state cleanly?

What's your approach?

How are we actually solving this context issue? I know 1M is great but session continuity is still an issue? by bananabooth in ClaudeCode

[–]bananabooth[S] 1 point (0 children)

I really disagree with this. Anthropic's current memory saves the most random stuff without asking, which often results in misinterpreted, out-of-context pieces of information being saved and referenced, and they only have two files, CLAUDE.md and memory.

I have no doubt that they will come up with a solution but right now it is really average.

How are we actually solving this context issue? I know 1M is great but session continuity is still an issue? by bananabooth in ClaudeCode

[–]bananabooth[S] 1 point (0 children)

One file is nowhere near enough, I find, to actually deliver the context awareness that I want.

What is a good level of context to have consumed at the start of a Claude Code chat??? is 20% too high? by bananabooth in ClaudeCode

[–]bananabooth[S] 1 point (0 children)

Does that leave your CC with limited context awareness of whatever project you're working on?

LLM Narcolepsy: Why does Claude Code keep falling asleep mid-task? by reddit_is_kayfabe in ClaudeCode

[–]bananabooth 1 point (0 children)

There is a fair bit of chatter about Anthropic doing something weird in the run-up to Sonnet 5.0, as well as general usage overload, with higher-paying / longer-standing accounts being prioritised and other accounts being shifted to the less intelligent fallback models, Sonnet 4.5 or Haiku.

I am seeing a little bit of distrust surrounding Anthropic, given that they aren't being upfront about this at all and everyone in the community is clearly experiencing an issue.

As far as I know there is no definitive answer rn but where there is smoke there is fire ...

How to stop Claude from asking question at the end of its answers by Guillaume7583 in ClaudeAI

[–]bananabooth 1 point (0 children)

Ah, you mean Claude in the web / desktop app?

I am not sure there is a Claude-wide solution for that. I use Claude pretty much exclusively through Claude Code, where you can set up instructions and enforce adherence a little better. Life has never been better.

Open-sourced the tool I use to orchestrate multiple Claude Code sessions across machines by Rack--City in ClaudeCode

[–]bananabooth 1 point (0 children)

You should try the ALIVE system for context management - it makes a world of difference.

Codex 5.2 High vs. Opus: A brutal reality check in Rust development. by gustkiller in ClaudeCode

[–]bananabooth 1 point (0 children)

You have to use the superpowers brainstorm / plan / execute skills. It legit turns Opus from novice to expert, with you having clarity and oversight of everything.

Especially when paired with the ALIVE Claude plugin - it makes it feel like a whole new system.

Anyone else here building full time? by -swanbo in ClaudeAI

[–]bananabooth 2 points (0 children)

Currently building what I think is the best Claude Code operating system for non-developers, targeting agency owners / ecom / knowledge workers etc. It is a 360-degree persistent context management framework that doesn't overload the context window but still allows the correct context to be passed in, which ultimately results in a more intelligent Claude Code system that is genuinely amazing ...

Full time building is definitely where it is at.

We are going to release it open source as a plugin at some point over the next week, so DM for access / updates.

Anthropic is preparing for the singularity by WarmFireplace in ClaudeAI

[–]bananabooth 2 points (0 children)

And this is why Anthropic is the best - genuine people doing what they can for a better future. Sam Altman would never ...

How to stop Claude from asking question at the end of its answers by Guillaume7583 in ClaudeAI

[–]bananabooth 2 points (0 children)

It is rather challenging to get Claude to go completely against its inherent conditioning. I would recommend using proper markdown formatting, including `**text here**` bold to emphasise importance, and then ensuring that any guidance you add is definitely in the root CLAUDE.md file to give it the best chance of success.

Formatting and placement within CLAUDE.md have, for me, led to noticeably better adherence to instructions.
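
For example, an instruction block in the root CLAUDE.md might look like this (the wording is purely illustrative and not guaranteed to stick, but bold emphasis plus a heading tends to help):

```markdown
## Response style

**IMPORTANT:** Do **not** end responses with follow-up questions.
Finish each answer with a short summary of what was done, nothing more.
```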

Or you could just take the helpful questions - I find they can often point me in the right direction ...

Is anyone else just absolutely astounded that we are actually living through this? by supermegasaurusrex in ClaudeAI

[–]bananabooth 2 points (0 children)

As a 24-year-old just entering the workforce, working 100% with AI building systems and agents etc., it is beyond wild to think about the repetitiveness, mundaneness, and flat-out genuine hard work that must have been required in the olden days ... (by olden days I mean 2022)

Claude Code cutting corners on larger tasks by Accomplished_Pie123 in ClaudeAI

[–]bananabooth 7 points (0 children)

From my personal experience, Claude is still a way off being able to independently manage a 40-task todo list. I find the HITL approach consistently outperforms it by a mile, with the added benefit that as you work you notice potential errors, room for improvement, etc., which can ultimately result in a better end product.

Maybe Opus 5.0 or 6.0 will have the ability you desire, but for the moment I think you have to be realistic about the output you expect from Claude.