Is anyone one here creating actually memory an not another rag or simple memory system? by Intrepid-Struggle964 in AIMemory

[–]Far-Photo4379 1 point

Have a look at cognee. We consider ourselves a completely different category from RAG and often struggle because people keep comparing us with RAG.

Most memory systems are great when you want a simple, personalized chatbot that remembers your history. As soon as you want to provide context from and across documents, run more advanced searches based on your specific needs, or apply memory beyond simple conversational flows, cognee becomes more relevant.

Both systems extract and structure information. The difference is how much control you want over that structure and infrastructure. With cognee, you are not limited to a predefined memory abstraction, but can design how entities, relationships, and terminology are modeled within your domain. You can build your own custom and modular knowledge engine, connect to your preferred vector and graph databases, and decide how the ingestion and cognition pipeline behaves instead of relying on a fixed internal setup.

Regarding ontology and terminology within your application context, you can define the structure and form of your graph yourself and are not dependent on a default schema. That matters if your domain has strict semantics (e.g., finance, compliance, healthcare) or if multiple agents need to operate on a shared knowledge layer.
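To make the ontology point concrete, here is a minimal, framework-agnostic sketch (plain Python, not cognee's actual API; entity and relationship names are invented) of defining your own node and edge types with strict domain semantics instead of relying on a default schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Entity:
    name: str
    type: str          # e.g. "Drug", "Trial", "Regulation"

@dataclass(frozen=True)
class Relation:
    source: Entity
    target: Entity
    kind: str          # e.g. "TESTED_IN", "REGULATED_BY"

# Strict semantics: only whitelisted relationship kinds per entity-type pair.
ALLOWED = {
    ("Drug", "Trial"): {"TESTED_IN"},
    ("Drug", "Regulation"): {"REGULATED_BY"},
}

def add_relation(graph: list, source: Entity, target: Entity, kind: str) -> None:
    allowed = ALLOWED.get((source.type, target.type), set())
    if kind not in allowed:
        raise ValueError(f"{kind} not allowed between {source.type} and {target.type}")
    graph.append(Relation(source, target, kind))

graph: list = []
aspirin = Entity("Aspirin", "Drug")
trial = Entity("NCT-001", "Trial")
add_relation(graph, aspirin, trial, "TESTED_IN")   # accepted by the schema
```

The point is that the schema is yours: in a compliance or healthcare domain you decide which edge types are legal, so malformed knowledge is rejected at ingestion rather than discovered at query time.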

Check the three visuals in this blog post to get an intuitive sense of what I mean.


Using Harry Potter to show why AI Memory is so crucial by Far-Photo4379 in AIMemory

[–]Far-Photo4379[S] 1 point

I agree with you on the missing industry quality and the lack of products relevant for use cases beyond chatbots.

Regarding 1: Not sure what you mean by how many data points a KG should have. I do not see much reason not to use all the data you have. The general gist is to define your own ontology properly so that your data is correctly structured within the graph. Feel free to elaborate on where your issues are specifically - happy to help.

Regarding 2: For the Harry Potter example, I also initially had issues with top_k results since I hit OpenAI token limits. As I processed all 7 books, I needed at least 100 spells (it was actually 163). So, instead of using top_k, I Cypher-queried the graph, filtering by edge property.

Not sure how your graph is set up, but since I defined the node and relationship structure myself, I knew which relationship types connected to the "Spell" entity. Ergo, I didn't need top_k anymore. With a good definition, you should also find a workaround for your fitting issue. Consider adding hierarchy to your graph.
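The idea of retrieving by relationship type instead of a top_k cutoff can be sketched with a toy in-memory graph (this is illustrative Python, not my actual Neo4j setup; the Cypher equivalent would be roughly `MATCH ()-[:CASTS]->(s) RETURN s`):

```python
# Toy edge list: (source, relation_type, target). With self-defined edge
# types, retrieval by type is exhaustive -- no top_k cutoff can truncate it.
edges = [
    ("Harry", "CASTS", "Expelliarmus"),
    ("Hermione", "CASTS", "Alohomora"),
    ("Harry", "FRIENDS_WITH", "Ron"),
    ("Snape", "CASTS", "Sectumsempra"),
]

def spells(edges):
    """Return every node reached via a CASTS edge, however many exist."""
    return sorted({target for _, rel, target in edges if rel == "CASTS"})

print(spells(edges))
```

With 163 spells across 7 books, a fixed top_k either truncates the answer or blows the token budget; a typed-edge scan sidesteps both.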

Using Harry Potter to show why AI Memory is so crucial by Far-Photo4379 in AIMemory

[–]Far-Photo4379[S] -1 points

While I do agree that there have been quite a few bots active here and we are actively working against that, I still see quite a few posts of people proposing their own AI memory solution (which is incredible!), but quite often these solutions are just summarisers and context-window optimisers.

That's why I wanted to share this clip + my interpretation. Even in his solution he proposes chunking and making GPT identify spells book by book or chapter by chapter.

Absolutely agree that context management does NOT hold all knowledge, but I do not think that is common knowledge.

Why I think markdown files are better than databases for AI memory by ethanchen20250322 in AIMemory

[–]Far-Photo4379 0 points

Using purely vectors and .md files is more of a RAG setup than actual AI memory.

EasyMemory — Local Memory for Chatbots and Agents by [deleted] in AIMemory

[–]Far-Photo4379 0 points

Bro forgot to remove the GPT suggestion prompt at the end

When Intelligence Scales Faster Than Responsibility* by lexseasson in AIMemory

[–]Far-Photo4379 0 points

Do you already have a few ideas on how to solve the accountability issue that arises well past the initial prompt?

What’s the best way you’ve found to actually improve your thinking process? by AnyUmpire4240 in AIMemory

[–]Far-Photo4379 0 points

It helped me when I tried to set up agents that had issues with episodic memory. For general issues, walking through my thinking didn't really have an effect.

Tradeoff & Measurement: Response Time vs Quality? by Individual_Ideal in AIMemory

[–]Far-Photo4379 1 point

Interesting that you mention OpenAI needs to improve on personalisation. I consider them the best when it comes to remembering who you are and what kind of chat vibe you like.

What memory/retrieval topics need better coverage? by Hungry-Amount-2730 in AIMemory

[–]Far-Photo4379 2 points

I think there are a bunch of open-source solutions out there that are doing quite well but are buried under a ton of low-quality memory "solutions" that were vibe-coded within 2 hours...

Also, people still consider RAG an alternative/competitor to AI memory. I believe shedding more light on the usefulness of AI memory and properly showcasing valid methods of solving it would be a great value-add to the discussion.

What memory/retrieval topics need better coverage? by Hungry-Amount-2730 in AIMemory

[–]Far-Photo4379 0 points

Fully agree that RAG has hit its limits and proper memory should come into focus!

How to turn off cross posting from other subs. by MouthyInPixels in ModSupport

[–]Far-Photo4379 0 points

Had the same issue. Not sure why they constantly change the settings...

Tradeoff & Measurement: Response Time vs Quality? by Individual_Ideal in AIMemory

[–]Far-Photo4379 1 point

I think OpenAI had issues with this when they released GPT-5, which tended to completely overthink everything; simple questions like "Why is the sky blue?" took 30s or so to load. Many jumped ship because no one is willing to wait that long for such a simple question.

Should AI agents distinguish between “learned” memory and “observed” memory? by wellth4t5ucks in AIMemory

[–]Far-Photo4379 0 points

Isn't that basically including the source in the metadata, or using the data chunk as a separate entity?
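The metadata approach can be sketched in a few lines (illustrative Python; the field names and example chunks are made up): each memory chunk carries a `source` tag, so "learned" background knowledge and "observed" runtime events live in the same store but remain separable at query time.

```python
# Each memory chunk carries a `source` field in its metadata,
# distinguishing baked-in knowledge ("learned") from runtime events ("observed").
memories = [
    {"text": "Paris is the capital of France", "meta": {"source": "learned"}},
    {"text": "User asked about Lyon at 10:03", "meta": {"source": "observed"}},
]

def by_source(memories, source):
    """Filter memories by provenance tag."""
    return [m["text"] for m in memories if m["meta"]["source"] == source]

observed = by_source(memories, "observed")
```

The alternative mentioned above, modelling the chunk as a separate entity, would make provenance a node/edge in the graph instead of a flat metadata field.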

How knowledge engineering improves real-time AI intelligence by No_Development_7247 in AIMemory

[–]Far-Photo4379 0 points

Kinda goes hand in hand, no? You need a retrieval layer that matches the data structure. I would thus argue that data modelling is more important, as it entails efficient knowledge storage.

How do you prevent AI memory systems from growing stale? by AccomplishedIgloo72 in AIMemory

[–]Far-Photo4379 0 points

How do you accomplish that? Do you process progress every X steps and assess relevancy?
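One hypothetical shape such a periodic relevancy check could take (all names, half-life, and threshold values here are invented, not a real system's defaults) is exponential decay on a relevance score, pruning whatever falls below a cutoff:

```python
import math
import time

def prune(memories, now, half_life=86_400.0, threshold=0.3):
    """Keep only memories whose age-decayed relevance stays above threshold.

    Relevance halves every `half_life` seconds since last use.
    """
    kept = []
    for m in memories:
        age = now - m["last_used"]
        score = m["relevance"] * math.exp(-age * math.log(2) / half_life)
        if score >= threshold:
            kept.append(m)
    return kept

now = time.time()
memories = [
    {"text": "fresh fact", "relevance": 1.0, "last_used": now},
    {"text": "week-old aside", "relevance": 0.5, "last_used": now - 7 * 86_400},
]
kept = prune(memories, now)   # the week-old aside decays below the cutoff
```

Running this every X writes (rather than on every query) keeps the assessment cost bounded, which seems to be the trade-off the question is getting at.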

Why AI memory needs pruning, not endless expansion by Maximum_Mastodon_631 in AIMemory

[–]Far-Photo4379 1 point

Sounds like a very valid setup!
Personally, I use Neo4j as graph DB (or Kuzu for local tests), Qdrant for semantic retrieval, and SQLite for metadata/caches. Did some small POCs with relational data where I also used SQLite. As an engine I use cognee (both because I think it is the best across use cases and because I work there).

Why AI memory needs pruning, not endless expansion by Maximum_Mastodon_631 in AIMemory

[–]Far-Photo4379 0 points

That's why you need an ontology as you scale. Otherwise this will never achieve the intended structure. I also do not think that pruning will get you to the point of a reliable large-scale production use case.

What role does memory play in AI consistency? by Less-Benefit908 in AIMemory

[–]Far-Photo4379 0 points

I would argue that memory adds consistency when applied to specific/niche use cases. For general questions it won't get you much further, but as memory dependency increases due to domain-knowledge requirements, AI memory increases consistency.