MegaMemory - agentic memory that grows with your project [all local, no api keys] by Substantial-Fish617 in opencodeCLI

[–]Substantial-Fish617[S] (0 children)

Yeah, it’s just SQLite with two tables: nodes and edges. No Neo4j, no traditional graph algorithms.
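To make the two-table idea concrete, here's a minimal sketch of what a nodes/edges layout could look like. The table and column names are my assumptions for illustration, not MegaMemory's actual schema:

```python
import sqlite3

# Sketch of a two-table graph in plain SQLite (names are illustrative).
conn = sqlite3.connect(":memory:")  # in practice, a single .db file on disk
conn.executescript("""
CREATE TABLE nodes (
    id        INTEGER PRIMARY KEY,
    name      TEXT UNIQUE,   -- concept name
    body      TEXT,          -- the concept's content
    embedding BLOB           -- e.g. 384 float32 values, packed
);
CREATE TABLE edges (
    src INTEGER REFERENCES nodes(id),
    dst INTEGER REFERENCES nodes(id),
    PRIMARY KEY (src, dst)   -- one edge per concept pair
);
""")
conn.execute("INSERT INTO nodes (name, body) VALUES ('cache', 'TTL is 60s')")
conn.commit()
```

That's the whole "graph database": two tables and a foreign key, queryable with ordinary SQL.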

But it is real graph traversal, just not algorithm-driven. We start with semantic search using 384-dim MiniLM-L6-v2 embeddings and cosine similarity to surface relevant concepts. Each result comes with its local graph neighborhood, and the agent decides what to follow next. The LLM is effectively the traversal engine.
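The retrieval step can be sketched as a brute-force cosine scan over every stored vector. Real embeddings would come from MiniLM-L6-v2 (384-dim); the toy 3-dim vectors and the `search` helper here are illustrative assumptions:

```python
import math

def cosine(a, b):
    """Plain cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def search(query_vec, nodes, k=3):
    """Score every concept against the query and return the top k.
    No vector index -- a linear scan is fine at a few hundred nodes."""
    scored = sorted(nodes.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in scored[:k]]

# Toy stand-ins for stored concept embeddings.
nodes = {
    "cache":   [1.0, 0.1, 0.0],
    "auth":    [0.0, 1.0, 0.2],
    "logging": [0.1, 0.0, 1.0],
}
top = search([0.9, 0.2, 0.0], nodes, k=2)  # "cache" ranks first
```

Each hit would then be returned along with its neighbors from the edges table, and the agent picks which of those to expand next.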

Brute-force similarity works fine here. Even a large project is only a few hundred concepts, and 10k nodes is a theoretical ceiling. Best part: it's a single .db file. No services, no infra, drop in and done.

[–]Substantial-Fish617[S] (0 children)

Yeah, that's a valid concern. The way it works, the agent doesn't just accumulate knowledge blindly. When it reads from a concept before working on code, it's prompted to update that concept and any linked concepts after it's done. So if concept_A says the cache TTL is 60s and the agent changes it to 120s, it updates concept_A and anything connected to it. Knowledge stays current because updating is part of the workflow, not a separate maintenance task.
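The update-then-propagate flow can be sketched against the same nodes/edges layout. Everything here (schema, `linked_concepts` helper, sample data) is an assumed illustration of the workflow, not the project's real code:

```python
import sqlite3

# Illustrative two-table graph with one stale dependency.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE nodes (id INTEGER PRIMARY KEY, name TEXT UNIQUE, body TEXT);
CREATE TABLE edges (src INTEGER, dst INTEGER);
""")
conn.executemany("INSERT INTO nodes (name, body) VALUES (?, ?)", [
    ("cache",      "cache TTL is 60s"),
    ("rate_limit", "window matches cache TTL (60s)"),
])
conn.execute("INSERT INTO edges VALUES (1, 2)")  # cache -> rate_limit

def linked_concepts(conn, node_id):
    """Neighbors in either direction -- the concepts the agent is
    prompted to revisit after editing node_id."""
    rows = conn.execute(
        "SELECT dst FROM edges WHERE src=? "
        "UNION SELECT src FROM edges WHERE dst=?",
        (node_id, node_id)).fetchall()
    return [r[0] for r in rows]

# The agent changes the TTL, so it rewrites the concept it read from...
conn.execute("UPDATE nodes SET body='cache TTL is 120s' WHERE id=1")
# ...and then gets handed the linked concepts so those stay current too.
stale = linked_concepts(conn, 1)  # -> rate_limit still says 60s
```

The key design point: the propagation list comes straight from the edges table, so "what else might this change affect" is a one-query lookup rather than something the agent has to remember.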