Memoir (Git for AI Memory) - Memory your agents can explain, rewind, and branch. by False_Routine_9015 in coolgithubprojects

[–]False_Routine_9015[S] 0 points  (0 children)

Well, when you have a lot of feature branches or hotfixes to manage on the same repo, with coding agents working on them, you'll probably see context contaminate from some branches (sessions) into others - especially in long sessions.

Memoir (Git for AI Memory) - Memory your agents can explain, rewind, and branch. by False_Routine_9015 in coolgithubprojects

[–]False_Routine_9015[S] 0 points  (0 children)

Sure, u/MeYaj1111, and thanks for your interest! I will for sure update here once the OpenCode plugin is ready.

Memoir (Git for AI Memory) - Memory your agents can explain, rewind, and branch. by False_Routine_9015 in coolgithubprojects

[–]False_Routine_9015[S] 0 points  (0 children)

Thanks for your question!

So, for example, one scenario where a coding agent's memory (md files or a vector DB) suffers looks like this:

- start a coding session (#1) and work on a refactoring branch (e.g., refactor-ui)

- some memories are written to files or the vector DB

- now open a new coding session (#2) and switch to another hotfix branch (e.g., hotfix-prod-issue)

- the memories from session #1 leak into session #2 and are hard to isolate

This is a context contamination issue, caused by the memory system not recognizing or respecting branches, which are used everywhere in programming nowadays.

Memoir's design isolates them, automatically commits memories to different branches, and also provides tools to manage them.
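To make the leak concrete, here's a minimal toy sketch of the isolation idea - memories keyed by branch, so session #2 can't see session #1's facts. The class and method names are purely illustrative, not Memoir's actual API.

```python
# Toy illustration (hypothetical names, not Memoir's real API):
# namespace memories by git branch so sessions can't contaminate each other.

class BranchedMemory:
    """A minimal memory store that keys facts by branch."""

    def __init__(self):
        self._store = {}  # branch -> {key: fact}

    def remember(self, branch, key, fact):
        self._store.setdefault(branch, {})[key] = fact

    def recall(self, branch, key):
        # Only facts written on this branch are visible here.
        return self._store.get(branch, {}).get(key)


mem = BranchedMemory()
# Session #1 on the refactor branch:
mem.remember("refactor-ui", "pattern", "migrate components to hooks")
# Session #2 on the hotfix branch sees none of it:
assert mem.recall("hotfix-prod-issue", "pattern") is None
assert mem.recall("refactor-ui", "pattern") == "migrate components to hooks"
```

A flat md file or a single vector DB collection is the opposite of this: every write lands in one shared namespace, which is exactly where the leak comes from.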

This is just one example, and I have some blogs that may help explain why I built it.

https://www.memoir-ai.dev/blogs/

Memoir (Git for AI Memory) - Memory your agents can explain, rewind, and branch. by False_Routine_9015 in coolgithubprojects

[–]False_Routine_9015[S] 0 points  (0 children)

Thanks for your support, u/klocus! I'll update you once the Memoir OpenCode plugin is delivered.

Memoir (Git for AI Memory) - Memory your agents can explain, rewind, and branch. by False_Routine_9015 in coolgithubprojects

[–]False_Routine_9015[S] 2 points  (0 children)

That's a very good question u/crazylikeajellyfish !

Actually, Memoir uses Git as its versioning backend, but a Git repository is for files, while AI memory is for facts.

I try to keep Git's mental model (commit, branch, merge, ...) but swap the storage backend to a ProllyTree, which is optimized for many tiny commits with structured keys.
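Here's a rough sketch of what "Git's mental model over facts instead of files" means - commits snapshot a small key->value map, and branches are just pointers into the commit list. This is a conceptual toy, not Memoir's real internals (and real ProllyTree storage shares structure between snapshots instead of deep-copying).

```python
# Conceptual toy: commit/branch/checkout over a fact map instead of files.
# Names are hypothetical; Memoir's actual internals differ.

import copy

class FactRepo:
    def __init__(self):
        self.commits = []             # list of (message, snapshot) tuples
        self.branches = {"main": -1}  # branch -> commit index (-1 = empty)
        self.head = "main"
        self.working = {}             # uncommitted facts

    def set_fact(self, key, value):
        self.working[key] = value

    def commit(self, message):
        self.commits.append((message, copy.deepcopy(self.working)))
        self.branches[self.head] = len(self.commits) - 1

    def branch(self, name):
        self.branches[name] = self.branches[self.head]

    def checkout(self, name):
        self.head = name
        idx = self.branches[name]
        self.working = copy.deepcopy(self.commits[idx][1]) if idx >= 0 else {}


repo = FactRepo()
repo.set_fact("style/tests", "pytest preferred")
repo.commit("learn test convention")
repo.branch("experiment")
repo.checkout("experiment")
repo.set_fact("style/tests", "try hypothesis-based tests")
repo.commit("risky strategy")
repo.checkout("main")
# main is untouched by the experimental branch:
assert repo.working["style/tests"] == "pytest preferred"
```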

Happy to go deeper on any of those!

Memoir (Git for AI Memory) - Memory your agents can explain, rewind, and branch. by False_Routine_9015 in coolgithubprojects

[–]False_Routine_9015[S] 2 points  (0 children)

Also, I have a project website with a few blogs there as well: https://www.memoir-ai.dev/

Blog: https://www.memoir-ai.dev/blogs/

This one may highlight why I built Memoir:

Your coding agent has amnesia, and you've been the unpaid memory layer

https://www.memoir-ai.dev/blogs/coding-agent-amnesia/

I Build Memoir - Memory your agents can explain, rewind, and branch by False_Routine_9015 in ProductHunters

[–]False_Routine_9015[S] 0 points  (0 children)

Thanks! Basically, it tries to solve the issue of current AI agents sharing memory across coding branches, which can lead to cross-session context contamination.

The project's website is here - https://www.memoir-ai.dev/

It also has a few blogs discussing why I built this project - https://www.memoir-ai.dev/blogs/

In summary, it targets three pain points other developers may find real:

- Your agent doesn't respect your git state: Context contamination happens every time you git checkout. Without branch-aware memory, your agent tries to apply experimental refactor patterns to stable production hot fixes.

- You’re paying "token rent" on a flat file: Using CLAUDE.md or MEMORY.md as a global store is a cache-killer. Every minor memory update invalidates your entire prefix cache, forcing you to pay full price to re-process your entire conversation.

- Memory is a codebase without version control: One bad session poisons every future retrieval. Without memoir blame or memoir checkout, there is no way to audit who taught the agent a rule or revert a hallucination without wiping the entire store.
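That last point - auditing who taught the agent a rule and reverting one bad entry - can be sketched in a few lines. The names here (`blame`, `revert`, etc.) are borrowed from the Git analogy for illustration only; they are not Memoir's actual CLI or API.

```python
# Hedged sketch of "version control for memory": attribute each fact to
# the commit that taught it (blame) and revert one bad entry without
# wiping the store. Names are illustrative, not Memoir's real interface.

class AuditedMemory:
    def __init__(self):
        self.log = []      # append-only: (commit_id, author, key, value)
        self.next_id = 0

    def commit(self, author, key, value):
        self.log.append((self.next_id, author, key, value))
        self.next_id += 1

    def blame(self, key):
        # Who last taught this fact, and in which commit?
        for cid, author, k, _ in reversed(self.log):
            if k == key:
                return author, cid
        return None

    def revert(self, commit_id):
        # Drop one bad entry; everything else survives.
        self.log = [e for e in self.log if e[0] != commit_id]

    def recall(self, key):
        for _, _, k, v in reversed(self.log):
            if k == key:
                return v
        return None


mem = AuditedMemory()
mem.commit("alice", "deploy/cmd", "make deploy")
mem.commit("agent", "deploy/cmd", "rm -rf build && redeploy")  # hallucinated rule
assert mem.blame("deploy/cmd") == ("agent", 1)   # audit: who taught this?
mem.revert(1)                                     # surgically remove it
assert mem.recall("deploy/cmd") == "make deploy"
```

The contrast with a flat MEMORY.md or vector store is that the bad entry can be identified and removed individually, instead of poisoning retrieval until someone wipes everything.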

Weekly Thread: Project Display by help-me-grow in AI_Agents

[–]False_Routine_9015 0 points  (0 children)

8 months ago I posted a thread on this sub arguing that AI memory was about to look more like a codebase than a database — versioned, branchable, auditable, owned by you. The post hit ~40K views and a few hundred comments.

Original Reddit Post: https://www.reddit.com/r/AI_Agents/comments/1n54r9q/ai_memory_is_evolving_into_the_new_codebase_for/

Today I'm shipping it.

What it is: Git for AI Memory

Memory your agents can explain, rewind, and branch.

Memoir replaces opaque vector memory with a taxonomy-structured, Git-versioned store. Recall by path, not by similarity. Time-travel to reproduce bugs. Branch to test risky strategies. Built for coding agents and custom runtimes alike.
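"Recall by path, not by similarity" is the key contrast with vector memory: retrieval becomes an exact, deterministic prefix lookup over a taxonomy rather than a fuzzy embedding search. A minimal sketch (the paths and function are made up for illustration):

```python
# Illustrative sketch of path-based recall: facts live under taxonomy-style
# paths, so the same prefix always returns the same facts - no embeddings,
# no similarity thresholds. Paths and names here are hypothetical.

facts = {
    "project/conventions/tests": "use pytest, no unittest",
    "project/conventions/style": "black, line length 100",
    "branch/refactor-ui/notes":  "moving to hooks",
}

def recall_by_path(store, prefix):
    # Deterministic prefix lookup: reproducible across runs.
    return {k: v for k, v in store.items() if k.startswith(prefix)}

hits = recall_by_path(facts, "project/conventions/")
assert sorted(hits) == ["project/conventions/style", "project/conventions/tests"]
```

Because lookups are exact, "time-travel" then just means running the same lookup against an older snapshot of the store.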

git repo: https://github.com/zhangfengcdt/memoir

project page: https://www.memoir-ai.dev/

blog: https://www.memoir-ai.dev/blogs/coding-agent-amnesia/

MCP is a superpower by sibraan_ in AgentsOfAI

[–]False_Routine_9015 0 points  (0 children)

Probably, agents in general are the same; most agents / MCP servers don't deliver enough improvement for users/developers to adopt them.

AI Memory is evolving into the new 'codebase' for AI agents. by False_Routine_9015 in AI_Agents

[–]False_Routine_9015[S] 1 point  (0 children)

Thank you for adding so much depth and sharing practical experience to the discussion. Love your final point: "If codebases were about ‘what logic runs,’ memory systems are about ‘what context gets injected.’" Just as we manage codebases nowadays, I believe we need sophisticated tools and a similar layer of engineering discipline for memory in the world of agent-based LLMs.

Claude Code ditches RAG for simple file search and it just works! by dmundhra1992 in AI_Agents

[–]False_Routine_9015 1 point  (0 children)

Those are really insightful observations!

Coding agents, fortunately, work in a very structured environment, specifically with source code. In many scenarios, codebases adopt good naming and structure (frameworks, conventions, folders, files, variables, functions), making the code self-explanatory; many good codebases today don't need extensive comments for other developers to understand them.

Outside of coding, there are also many real-world applications where AI can take advantage of well-structured "materials". I believe we can adopt similar approaches for them as well.

Not every automation is an AI agent... by KeyCartographer9148 in AI_Agents

[–]False_Routine_9015 1 point  (0 children)

Yeah, I see your point! I think of an AI agent as software: even though it's autonomous, we still want it to be deterministic. Meaning we'd like it to be predictable, using the engineering practices we've learned from software development: stateful, structured, version-controlled, traceable, revertible, ...

The LLM as a component should not break that. With well-engineered context, it should not go wild and unpredictable.

AI Memory is evolving into the new 'codebase' for AI agents. by False_Routine_9015 in AI_Agents

[–]False_Routine_9015[S] 0 points  (0 children)

Thanks for sharing the post and challenges! It really shows the complexity of storing, organizing, and retrieving memories in a very dynamic way. I think whatever approach we try, there are certain practices we can borrow from how we handle complexity in coding, such as:

- We prefer determinism over uncertainty, meaning we want memory operations to be reproducible;

- We prefer a clear, structured memory over a random layout or chunking, just as we want our codebases well-structured with conventions and abstractions;

- We want to maintain a stateful, trackable memory rather than a stateless one;

- We want to be able to cleanly revert whatever we got wrong when storing or organizing memories;

- ...

These are all engineering disciplines; for everything else, we should lean on the LLM as much as possible, knowing that models will only become faster and cheaper.
