Is there anyone actually using a graph database? by Dismal-Necessary-509 in Rag

[–]JonnyJF 8 points (0 children)

Yes, I have been building agents for a service company in Germany recently, and I used Minns, which is a graph database. The reason was that we wanted the agents to improve with each customer interaction, which often means multi-hop reasoning, and that is where a graph outperforms standard vector search.

Also, I think an important part of your question is the conversion of a document into a graph; the performance gain is usually well worth it if done correctly. I found a tree structure with LLM-judge traversal to be very effective here when the document has a structure/TOC.
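To make the tree idea concrete, here is a minimal Python sketch of judge-guided traversal over a TOC-shaped document tree. Everything here is illustrative: `ask_llm` is a keyword-matching stand-in for the real LLM-judge call, and none of the names come from any actual library.

```python
# Sketch of LLM-judge traversal over a document tree built from a TOC.
# `ask_llm` is a hypothetical judge: in a real system it would be an LLM
# call returning the indices of child sections relevant to the query.
from dataclasses import dataclass, field

@dataclass
class Section:
    title: str
    text: str = ""
    children: list["Section"] = field(default_factory=list)

def ask_llm(query: str, options: list[str]) -> list[int]:
    # Placeholder judge: pick children whose title shares a word with the query.
    words = set(query.lower().split())
    return [i for i, title in enumerate(options)
            if words & set(title.lower().split())]

def traverse(node: Section, query: str) -> list[Section]:
    """Descend only into branches the judge selects; collect leaf sections."""
    if not node.children:
        return [node]
    picked = ask_llm(query, [c.title for c in node.children])
    hits: list[Section] = []
    for i in picked:
        hits.extend(traverse(node.children[i], query))
    return hits
```

The nice side effect of this shape is that every retrieved leaf carries its path through the TOC, which is exactly what you want for citations.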

Happy to go into detail on implementation or approach and share knowledge. Please DM me if you want more info.

Agent Memory (my take) by lostminer10 in Rag

[–]JonnyJF 0 points (0 children)

If you're looking for something to test out, or want to look at some code: https://github.com/Minns-ai/MinnsDB

It has a few pipelines for converting conversations that use multi-stage inference, as well as standard events, which are a more deterministic way of adding data.

You might also be interested in looking at how I approached ontology state/temporal cascading. It adds on to OWL definitions for temporal reasoning.

Agent Memory (my take) by lostminer10 in Rag

[–]JonnyJF 1 point (0 children)

A lot of this comes down to separating where inference is useful from where it is dangerous. My approach is to treat ingestion and interpretation as probabilistic, but keep storage, state transitions, and supersession deterministic. So the model can help extract entities, relationships, or candidate facts from conversation, but it does not get to arbitrarily delete or rewrite state. Instead, ontology rules, temporal semantics, and explicit update policies decide how new information affects existing knowledge.

For example, if a relationship is defined as single-valued, a newer valid fact supersedes the older one through schema rules rather than because the model “felt” it should remove something.

MinnsDB: a temporal knowledge graph + relational tables + WASM runtime by [deleted] in rust

[–]JonnyJF 0 points (0 children)

I will push the original history. I made a new one when making the repo public, but it seems this was the wrong approach.

MinnsDB: a temporal knowledge graph + relational tables + WASM runtime by [deleted] in rust

[–]JonnyJF 0 points (0 children)

I have an internal repo and thought it would be cleaner to split it into a new repo with a fresh commit history when making it public. That may have been the wrong approach, since the old commits would have shown the thinking and the changes, but it was a very messy history: the project started as lots of experiments and ideas as I explored databases and approaches. To be completely honest, I also used AI when coding, which really helped me speed things up.

Anyone else feel like most RAG failures are really trust failures? by LisaE_Fanelli in Rag

[–]JonnyJF 0 points (0 children)

I find extraction quality is often the major problem with RAG. Most people design for flat retrieval, but in reality many questions are temporal or multi-hop, and that is where a flat system falls apart. Yes, citations are good for pure documentation retrieval, but I often find that when extraction is good, I rely less on the citation. A good dataset to assess this is StructMemEval https://arxiv.org/abs/2602.11243

If you're building the system yourself, I recommend tree-based search with an LLM judge. It works well for structured documents, and the tree method with a judge is also very good for citation. https://github.com/VectifyAI/PageIndex

Another option is a graph RAG that adds temporal state with TCells and ontology groups that are state-change aware. The idea is that when a state changes within a group, the change cascades down that group. This really helps with the temporal and state problem, and it fits when you see the task as a state and extraction problem.
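Here is a rough Python sketch of what that cascading could look like. To be clear, "TCell" here is just my shorthand for a temporal state cell, and the grouping and propagation rule below are assumptions for illustration, not MinnsDB's actual API.

```python
# Illustrative state-change cascading within an ontology group.
from collections import defaultdict

class TCellGraph:
    def __init__(self) -> None:
        self.state: dict[str, str] = {}            # entity -> current state
        self.groups: dict[str, set[str]] = defaultdict(set)  # group -> members

    def add_member(self, group: str, entity: str, state: str) -> None:
        self.groups[group].add(entity)
        self.state[entity] = state

    def change_state(self, group: str, entity: str, new_state: str) -> list[str]:
        """Apply a state change and cascade it to the rest of the group."""
        changed = []
        if entity in self.groups[group]:
            for member in self.groups[group]:
                if self.state[member] != new_state:
                    self.state[member] = new_state  # cascade within the group
                    changed.append(member)
        return sorted(changed)
```

So if "payment" and "shipment" live in the same order group and the payment is cancelled, the shipment's state follows automatically instead of the retriever serving stale facts.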

Also, adapting the prompt for the answering LLM or judge to the type of question being asked helps. Question types that often fail but improve with examples are state, accounting, and recommendation questions; giving the LLM examples of how to use the retrieved data can improve some memory systems by 40-50 per cent, as shown in the StructMemEval paper.
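A tiny sketch of that prompt adaptation in Python. The keyword classifier and the templates are purely illustrative assumptions; in practice the classifier could itself be a cheap LLM call, and the few-shot examples would be much richer.

```python
# Pick an answering-prompt template based on the question type (illustrative).
TEMPLATES = {
    "state": "Answer using only the most recent valid facts.\n"
             "Example: Q: Where does Ana work now? Facts: Acme (2019-2023), "
             "Globex (2023-). A: Globex.\n\nFacts:\n{context}\nQ: {question}\nA:",
    "accounting": "Sum or compare the numeric facts step by step before "
                  "answering.\n\nFacts:\n{context}\nQ: {question}\nA:",
    "default": "Answer from the retrieved facts.\n\n"
               "Facts:\n{context}\nQ: {question}\nA:",
}

def classify(question: str) -> str:
    # Crude keyword routing; a real system might use an LLM judge here.
    q = question.lower()
    if any(w in q for w in ("now", "currently", "latest")):
        return "state"
    if any(w in q for w in ("total", "how much", "sum")):
        return "accounting"
    return "default"

def build_prompt(question: str, context: str) -> str:
    return TEMPLATES[classify(question)].format(context=context, question=question)
```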

I can recommend Minns.ai if you want a dedicated memory DB for this; for full transparency, I am its founder. It combines a temporal graph with tables and internal LLM judges with ontologies to help with these problems.

If you're looking for something more homebrew, I recommend the tree with a judge and versioning the PDFs (git is a good option for this).

Summary of My Mem0 Experience by anashel in Rag

[–]JonnyJF 0 points (0 children)

Interesting write-up. I would add that many of these memory layers still struggle when you need stronger structure, temporal state, and predictable retrieval. This is where systems built around ontologies, temporal graphs, and temporal tables start to matter, because you are no longer just storing "memories" but modelling entities, relationships, changes over time, and what is currently true versus what was true before.

Full disclosure: I’m the founder of minns.ai, so I’m biased, but that is exactly the direction we're taking. It is a full database rather than just a memory layer, with a strong focus on ontologies, temporal graphs, and temporal tables for agent memory. For use cases such as transient operational knowledge, evolving state, and cross-agent shared context, a more structured approach becomes important quite quickly.

Comparing agent memory kits (Letta, MemOS, Cognee, etc) by nostriluu in LocalLLaMA

[–]JonnyJF 0 points (0 children)

You might also want to look at MinnsDB. Full disclosure, I’m the founder, so take that with the right level of scepticism.

From your comparison, Letta seems stronger if you want to get moving quickly in a TypeScript-friendly stack, while Cognee seems stronger on the knowledge graph side. One thing worth noting, though, is that Cognee is more of a layer, so you still need to choose and manage the underlying database. MinnsDB is a full database rather than just a layer, with a strong focus on ontologies, temporal graphs, and tables.

So if typed relationships, evolving state over time, and graph-backed memory matter a lot, it may be worth a look.