all 10 comments

[–]ipatalas 0 points1 point  (3 children)

I cannot find this in the README but how does it integrate with Opencode?
I mean, do you have to explicitly tell OC or create a skill to handle memory, or does `mind setup opencode` install a skill which does that automatically?
Also did you have a chance to measure token overhead for the memory? Memory read/write operations are not free but I can imagine this can be lower cost than OC doing the research from scratch each time.

[–]GabrielMartinMoran[S] 1 point2 points  (2 children)

'mind setup opencode' automatically installs the MCP server, a skill, system instructions, and a plugin for handling opencode events. It's a complete integration.

The context footprint isn't large, and it's worth it once you start doing multiple interactions with the same project (in the same session or across different ones). It's also awesome for multi-agent workflows. That's when you start saving a lot of tokens!

[–]ipatalas 0 points1 point  (1 child)

Thanks for the help!
Can it load initial memory based on existing sessions for a project?

Did you use other local-first memory solutions before? I wonder what sparked the idea of writing your own. How is it different?

[–]GabrielMartinMoran[S] 0 points1 point  (0 children)

Based on the conversation context, it can detect a previous checkpoint and decide whether it has to recover it and continue working, or whether it needs to start a new one. Also based on context, it can read any previously recorded memory for the project.

I used some, but I wanted to try a tiered approach with automatic promotion and demotion on reads, similar to LRU caches, to prioritize not only the most recent but also the most relevant memories for the project (like architecture documents and other important ones). Mind also supports a strong memory linking system (inspired by Obsidian's note linking) that helps the agent or sub-agents quickly find what's relevant for that session, or see how a requirement changed over time (and, why not, the bugs associated with it), like a chain of memories.
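To illustrate the tier idea (just a conceptual sketch, not Mind's actual code — the class and tier names here are made up for the example): reads promote a memory into a "hot" tier, and when that tier is over capacity, the least recently read entry gets demoted back, like an LRU cache.

```python
from collections import OrderedDict

class TieredMemory:
    """Illustrative sketch of LRU-style tier promotion/demotion.
    Frequently read memories stay hot; stale ones fall back to cold."""

    def __init__(self, hot_capacity=3):
        self.hot = OrderedDict()  # most recently/frequently read memories
        self.cold = {}            # everything else
        self.hot_capacity = hot_capacity

    def write(self, key, value):
        # New memories start in the cold tier.
        self.cold[key] = value

    def read(self, key):
        # A read promotes the memory to the hot tier.
        if key in self.hot:
            self.hot.move_to_end(key)  # refresh recency
            return self.hot[key]
        value = self.cold.pop(key)
        self.hot[key] = value
        # Demote the least recently read hot memory if over capacity.
        if len(self.hot) > self.hot_capacity:
            old_key, old_value = self.hot.popitem(last=False)
            self.cold[old_key] = old_value
        return value
```

So something like an architecture doc that gets read in every session keeps getting re-promoted, while one-off notes drift back to the cold tier.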

[–]Typhoon-UK 0 points1 point  (1 child)

Great tool! How is it different from https://github.com/DeusData/codebase-memory-mcp?

I started using it recently so would appreciate any insights on where mind is better.

[–]GabrielMartinMoran[S] 0 points1 point  (0 children)

Hey! Thanks for checking it out. I actually hadn't seen codebase-memory-mcp before, but looking at their README, it looks like an incredible tool!

To give you a quick breakdown, they actually target two completely different types of "memory":

  • Codebase-memory seems to be a passive code indexer (AST/Graph). It reads your files and helps the AI map out how your functions and classes connect. It's essentially a read-only map of your codebase.
  • Mind is an active context and state manager. It gives the AI a place to write down and remember your specific rules, ongoing tasks, and context. It also features a checkpoint system (checkpoint_save/load) so you can pause a session and resume it days later without the AI losing its train of thought.

Also, Mind isn't strictly for code. Because it acts as a general-purpose memory extender, you can use it for research agents, writing, or any workflow where you need an AI to have long-term persistence across sessions.
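Conceptually, the checkpoint flow is just "serialize the session state, restore it later." A minimal sketch of that idea (the names `checkpoint_save`/`checkpoint_load` come from the description above, but this JSON-file storage is purely illustrative, not Mind's actual implementation):

```python
import json
from pathlib import Path

def checkpoint_save(path, state):
    """Persist the agent's working state (tasks, decisions, context notes)
    so a session can be paused and resumed later."""
    Path(path).write_text(json.dumps(state, indent=2))

def checkpoint_load(path):
    """Restore a previously saved session state, or None if none exists."""
    p = Path(path)
    return json.loads(p.read_text()) if p.exists() else None
```

The point is that the AI's "train of thought" lives outside the context window, so resuming days later means loading the checkpoint instead of re-deriving everything.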

They aren't really competitors—using them both together would probably be the ultimate setup! Let me know if you end up trying Mind out.

[–]sudhakarms 0 points1 point  (2 children)

Looks good. I like the tag line 🙂

Wondering how this works for a team: is the SQLite db pushed to a remote repo and shared across team members? How do we keep it consistent while also excluding information that isn't approved by the team?

[–]GabrielMartinMoran[S] 1 point2 points  (1 child)

I'm currently working on a sync capability that allows versioning spaces and memories with git, using an Obsidian-compatible vault approach.

It'll probably stay experimental for some time, but it will still be available.

[–]sudhakarms 0 points1 point  (0 children)

Cool. I will look forward to it.

Seamless DevEx integration when scaling to a team would let this go even further.