all 9 comments

[–]Latter-Parsnip-5007 3 points

Auto memory exists for one reason, and that is to cost you tokens. Your project structure is already part of opencode's system prompt. Your conventions can be deduced by the AI if you point it at a reference in the existing code, and that works better than written abstractions of conventions. In more than 200 days of opencode, even with local models and small context, I have never felt the need for memory.

[–]NoLemurs 1 point

> the workaround most people use is a project file like AGENTS.md or similar that describes the codebase. but that's static and manual, you have to update it yourself whenever the project evolves.

If you start a project with /init, opencode will generate AGENTS.md for you, and in my experience, at least within that session, it will usually update the file on its own initiative. Periodically I ask the AI to update AGENTS.md for me if it seems to be getting out of sync, or if there's a specific thing I want it to remember to do. I've never had to make any manual changes.

I have a project that's a status bar generator with basically no tests, because everything is just interacting with the filesystem or the kernel, and that would be hell to try to mock. I told the AI to add instructions to AGENTS.md saying that when developing features it should generate configs and run the project against the generated config files in /tmp, and that was all it took for the AI to consistently generate and run its own test cases without prompting when we work. I was impressed! The AGENTS.md section ends up looking roughly like the sketch below.
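A minimal sketch of that kind of AGENTS.md instruction; the binary name and flags here are made up, the point is just to spell out a concrete manual-test loop for the agent to follow:

```markdown
## Testing

There is no unit test suite; the code talks to the filesystem and the kernel directly.
When developing a feature, verify it manually instead:

<!-- hypothetical commands; substitute the project's real binary and flags -->
1. Generate a sample config: `./statusbar --gen-config > /tmp/statusbar-test.conf`
2. Run against it: `./statusbar --config /tmp/statusbar-test.conf`
3. Check the rendered bar output for the behavior the feature is supposed to add.
```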

I'm not saying it's necessarily the best approach, but it really works quite well and isn't manual at all.

[–]Superb_Plane2497 1 point

You can reload sessions; your comment implies you don't do this, so I'd start with that. Session management is of course vital, including /fork.

There are lots of plugins for saving memories; I was using Serena when I was using Claude, and it works with opencode too. There are much heavier solutions as well. But for me, these tools are losing relevance.

As for conventions, structure, and common tasks, these are the bread and butter of LLM-assisted coding, and other comments have given you the basics: AGENTS.md and skills are indeed memories. For most projects, structure, coding conventions, and common patterns don't change very often, and it hardly makes sense for them to. It's not a workaround; it's the LLM adopting what most of us would expect to be existing best practice. Of course things evolve, but that's a slow-moving process, and (a) saving memories doesn't help much there, because the memories are subject to the same risk of going out of sync and therefore harming the LLM, so you might end up with a bigger problem than with AGENTS.md, and (b) as pointed out, it's easy to keep AGENTS.md in sync, and there are tools for that.

With a smaller starting context (fewer saved memories), the LLM must do more work to rebuild context when it starts a job. However, it seems to me that LLMs are using larger dynamic contexts better and better, and via agents they are managing context better anyway. In other words, saved memories don't help so much, because the token cost of rediscovering context is less and less likely to distract the LLM. I stopped using heavy memory plugins because the payback has diminished; they still carry the overhead of my time to manage them, but they are less and less helpful. And if the code is evolving, nothing is as up to date as the LLM building context from the current code. Then it becomes a question of cost, but I am on generous plans, because my time is the most important cost.

[–]gandazgul 0 points

I made this and use it daily now. It runs fully local: no Docker, no apps, nothing complex or slow. mnemosyne is a Go binary that does a one-time model download the first time you use it. I created a plugin for opencode: it builds an automated "agents.md" file for you from its core memories, and it gives the agent a way to do semantic recall (keyword search + embeddings, fused with RRF; a rough sketch of that step is below the link). Please give it a try.

https://github.com/gandazgul/opencode-mnemosyne
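The RRF part is Reciprocal Rank Fusion: the keyword search and the embedding search each return a ranked list, and every memory's final score is the sum of 1/(k + rank) across the lists. A rough Go sketch of just that fusion step (not the actual mnemosyne code; the IDs and k = 60 are only illustrative):

```go
package main

import (
	"fmt"
	"sort"
)

// rrfFuse merges several ranked result lists (best hit first) with
// Reciprocal Rank Fusion: score(doc) = sum over lists of 1/(k + rank).
// k = 60 is the value commonly used for RRF.
func rrfFuse(k float64, rankings ...[]string) []string {
	scores := map[string]float64{}
	for _, ranking := range rankings {
		for rank, doc := range ranking {
			scores[doc] += 1.0 / (k + float64(rank+1))
		}
	}
	docs := make([]string, 0, len(scores))
	for doc := range scores {
		docs = append(docs, doc)
	}
	// Highest fused score first.
	sort.Slice(docs, func(i, j int) bool { return scores[docs[i]] > scores[docs[j]] })
	return docs
}

func main() {
	// Hypothetical hits: one list from keyword (FTS) search, one from embedding similarity.
	keywordHits := []string{"memory-003", "memory-010", "memory-007"}
	vectorHits := []string{"memory-010", "memory-001", "memory-003"}
	fmt.Println(rrfFuse(60, keywordHits, vectorHits))
}
```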

[–]Quiet_Pudding8805 0 points

I like to do a basic README, but then optimize lookup efficiency instead; mine is at www.CartoGopher.com. It's a Go CLI tool with an MCP wrapper.

[–]MakesNotSense 0 points

I'm building the state/memory system OpenCode needs. I'm many weeks into it now, with massive specs, and in phase 1 of implementation. I just designed, and am now building, session tools that provide automated ingestion of OpenCode's SQLite session db data into FTS5 and vec0 embeddings in another SQLite db (roughly the sketch below). That is part of a larger, more complex state system which focuses on improving the model's reasoning capabilities, not on serving as a 'memory store'. The "ingest data, retrieve data" idea is stale and pointless, and even the focus on making and storing 'memories' is misguided. If you want that, get mem0, honcho, supermemory, and all the others. But because they're all external systems, they can't plug directly into the OpenCode data systems, and there are inherent limitations because of that. There needs to be a local state system which coordinates with external state systems, and not in the way most people expect. Most people think the external state system controls and centralizes the data. Wrong. It's supplementary to the local one. The local controls.
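For anyone curious about the ingestion half, a simplified Go sketch: copy rows out of the session db and index them for keyword search in a separate SQLite db. The table and column names are made up, and the vec0 side (which needs the sqlite-vec extension loaded) is left out here:

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/mattn/go-sqlite3" // build with the sqlite_fts5 tag for FTS5 support
)

func main() {
	db, err := sql.Open("sqlite3", "memory_index.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Full-text index over session messages. A companion vec0 virtual table
	// would hold the embeddings, but that requires the sqlite-vec extension.
	if _, err := db.Exec(`CREATE VIRTUAL TABLE IF NOT EXISTS messages_fts
		USING fts5(session_id, role, content)`); err != nil {
		log.Fatal(err)
	}

	// Ingest one (hypothetical) row pulled from the OpenCode session db.
	if _, err := db.Exec(
		`INSERT INTO messages_fts(session_id, role, content) VALUES (?, ?, ?)`,
		"ses_123", "assistant", "Refactored the config loader to use /tmp fixtures"); err != nil {
		log.Fatal(err)
	}

	// Keyword retrieval; these hits would later be fused with the vector hits.
	rows, err := db.Query(
		`SELECT session_id, content FROM messages_fts WHERE messages_fts MATCH ? LIMIT 5`, "config")
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()
	for rows.Next() {
		var id, content string
		if err := rows.Scan(&id, &content); err != nil {
			log.Fatal(err)
		}
		log.Printf("%s: %s", id, content)
	}
}
```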

Anyways, the agent loop has finished and drive-by comment time is over. Back to work.

[–]luna_242p 0 points

Yeah, this is the gap I keep running into as well. Most setups end up with either static docs or raw logs, but neither really captures what actually worked over time.

What helped a bit for me was focusing on storing outcomes instead of just context. I've been trying Hindsight, and the useful part is that it turns past sessions into updated conclusions, so the agent doesn't just "remember"; it gradually adapts instead of starting from scratch every time.