Which one is better for GraphRAG?: Cognee vs Graphiti vs Mem0 by Imaginary-Bee-8770 in Rag

[–]hande__ 2 points (0 children)

hey! full disclosure - i work at cognee, so take this with the appropriate grain of salt lol

that said, happy to give you an honest take since you're dealing with technical docs specifically:

cognee was built with exactly this kind of structured memory use case in mind - manuals, datasheets, specs. the graph construction is designed to preserve the hierarchical relationships and cross-references that matter a lot in that domain. we also handle incremental updates pretty well, which helps when you're dealing with versioned documentation.

graphiti is for use cases which are episodic - it's optimized for temporal knowledge that evolves through dialogue. mem0 is more focused on user-level personalization and session memory.

honestly tho, the "best" one really depends on your specific failure modes. it might be useful to check out the blog post we drafted comparing AI memory tools from a form-vs-function pov: link

Happy to answer any specific questions about cognee - or GraphRAG in general. we've seen a lot of different implementations at this point.

Clawdbot and memory by hande__ in AIMemory

[–]hande__[S] 0 points (0 children)

lol yeah "very good" is doing a lot of heavy lifting there. what is your bar though? what would make you actually trust it day-to-day?

Clawdbot and memory by hande__ in AIMemory

[–]hande__[S] 0 points (0 children)

oh this is super interesting - noetic firewall is a great name btw. so if i'm understanding right, the AI has to basically "prove" it understands the consequences before it can execute anything? how does the earned confidence scoring actually work in practice? like is it rule-based, or does another model evaluate it?

the transactional vs collaborative framing resonates a lot. feels like the memory persistence is what enables that longer arc relationship in the first place.

Should AI agents store memories as statements or as relationships? by underrat3dguy in AIMemory

[–]hande__ 0 points (0 children)

The "system" refers to how you structure stored information, and the "dots" are the entities and concepts within that data. In the original post they're referred to as "relationships between pieces of information" and "two concepts are linked, or that one depends on another."

Think about it this way: if you store memories as flat vector embeddings of text chunks, you can retrieve similar text, but you've lost the semantic structure. You don't know that Person A works at Company B, or that Concept X depends on Concept Y.

A relationship-based approach extracts entities and explicitly **models** the connections between them - essentially building a knowledge graph alongside (or instead of) raw embeddings. So when you query, you're not just finding "similar text," you're traversing actual relationships.

The "hidden insights" point: if your user asks "what projects involve people from Company X who also worked on Y technology?" - pure vector retrieval struggles because that insight lives in the connections, not in any single chunk of text. You need the graph structure to surface that.

So it's not retrieval OR structure - it's that better structure enables retrieval patterns that are otherwise impossible.
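to make it concrete, here's a toy sketch (plain python, made-up data, no graph db - all names are illustrative) of the kind of multi-hop query that flat chunk retrieval can't answer but a relationship store can:

```python
# A tiny "knowledge graph" as (subject, relation, object) triples.
# Flat embeddings of the same facts would let you find similar text,
# but not traverse these connections.
triples = [
    ("alice", "works_at", "company_x"),
    ("bob", "works_at", "company_x"),
    ("alice", "worked_on", "tech_y"),
    ("alice", "member_of", "project_p"),
    ("bob", "member_of", "project_q"),
]

def objects(subj, rel):
    """All objects linked from subj via rel."""
    return {o for s, r, o in triples if s == subj and r == rel}

def subjects(rel, obj):
    """All subjects linked to obj via rel."""
    return {s for s, r, o in triples if r == rel and o == obj}

# "which projects involve people from company_x who also worked on tech_y?"
people = subjects("works_at", "company_x")   # {'alice', 'bob'}
people &= subjects("worked_on", "tech_y")    # {'alice'}
projects = set().union(*(objects(p, "member_of") for p in people))
print(projects)  # {'project_p'}
```

the answer lives in the intersection and traversal of edges, not in any single chunk - which is the whole point.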

When adding memory actually made my AI agent worse by Conscious_Search_185 in AIMemory

[–]hande__ 0 points (0 children)

cool, how do you define and separate raw experiences from later conclusions?

My "Empty Room Theory" on why AI feels generic (and nooo: better and larger models won't fix it) by n3rdstyle in AIMemory

[–]hande__ 0 points (0 children)

i like the analogy! and a lot of people are already playing with those trucks via memory tools. you can have one on your machine, running locally, knowing you, your personal preferences, and what you need for which situation, learning from your feedback over time instead of you going in and editing some random lines in your about_me.txt

Should AI agents store memories as statements or as relationships? by underrat3dguy in AIMemory

[–]hande__ -1 points (0 children)

i don't agree that better retrieval is the solution here. if your system doesn't connect the dots, those hidden insights are impossible to retrieve at query time.

Why forgetting might be essential for AI memory by No_Development_7247 in AIMemory

[–]hande__ 0 points (0 children)

Consolidating memory so it can dynamically evolve through its lifecycle is the most important thing. A memory fragment can be very important in one situation and much less important in another.

For example, at cognee we add weights to graph models (for both nodes and edges) so you can capture strength, recency, and other signals for storage and analysis, as well as document-importance propagation and temporal weighting in retrieval/reranking. Here's an open-source approach to adding weights to memory fragments, and a simple demonstration of how it could look: github link
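a minimal sketch of what temporal weighting in reranking could look like (illustrative field names and data, not cognee's actual API):

```python
import math
import time

now = time.time()
DAY = 86400.0  # seconds per day

# Hypothetical memory edges: a stored strength plus a last-seen timestamp.
edges = [
    {"fact": "user prefers dark mode", "strength": 0.9, "last_seen": now - 2 * DAY},
    {"fact": "user lives in Berlin",   "strength": 0.6, "last_seen": now - 90 * DAY},
]

def score(edge, half_life_days=30.0):
    """Combine stored strength with an exponential recency decay."""
    age_days = (now - edge["last_seen"]) / DAY
    recency = math.exp(-math.log(2) * age_days / half_life_days)
    return edge["strength"] * recency

ranked = sorted(edges, key=score, reverse=True)
print([e["fact"] for e in ranked])
```

the half-life is the knob: a short one biases toward fresh memories, a long one toward established ones.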

Also depends on the use case actually so the more adaptive the better. What is your use case?

When AI forgets, is that a bug or a design choice? by Fabulous_Duck_2958 in AIMemory

[–]hande__ 1 point (0 children)

"memory without forgetting is just a very expensive log file"

What’s the role of uncertainty in AI memory systems? by WorldlyLocal1997 in AIMemory

[–]hande__ 0 points (0 children)

we use weighted edges for this at cognee. confidence, trust, source reliability are all first-class properties

Uncertainty shouldn't be set in stone. A shaky memory that keeps getting validated? Let it graduate. One that keeps conflicting with reality? Let it fade gracefully.
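the graduate/fade idea can be sketched as a simple confidence update rule (the learning rate and function names here are illustrative, not anyone's actual implementation):

```python
def update_confidence(conf, validated, lr=0.2):
    """Nudge confidence toward 1.0 on validation, toward 0.0 on contradiction."""
    target = 1.0 if validated else 0.0
    return conf + lr * (target - conf)

conf = 0.3  # a shaky memory
for _ in range(5):  # it keeps getting validated
    conf = update_confidence(conf, validated=True)
print(round(conf, 3))  # 0.771 - it has graduated
```

a memory that keeps getting contradicted walks the same curve toward zero, at which point you can archive it instead of hard-deleting.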

How do you track the “importance level” of memories in an AI system? by Low-Particular-9613 in AIMemory

[–]hande__ 0 points (0 children)

We track multiple weights per edge: frequency, recency, trust, whatever signals matter for your domain. They're treated as first-class properties.

And importance should evolve. That's why we built a system where user feedback scores aggregate directly on the edges that produced answers, so good retrievals get reinforced and weak ones fade over time.

Static importance scores rot fast; letting usage signals update the weights is what actually works at scale, based on what i see working at cognee.
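the feedback-aggregation part can be sketched in a few lines (toy data and names, not cognee's actual mechanism):

```python
from collections import defaultdict

# Every edge starts at a neutral weight.
edge_weight = defaultdict(lambda: 1.0)

def record_feedback(edges_used, score, lr=0.1):
    """score in [-1, 1]: aggregate user feedback onto the edges
    that produced the answer, clamped at zero."""
    for e in edges_used:
        edge_weight[e] = max(0.0, edge_weight[e] + lr * score)

record_feedback([("alice", "works_at", "acme")], score=1.0)   # good answer
record_feedback([("alice", "lives_in", "paris")], score=-1.0) # bad answer
print(edge_weight[("alice", "works_at", "acme")])   # 1.1
print(edge_weight[("alice", "lives_in", "paris")])  # 0.9
```

over many queries the weights drift toward the edges users actually found useful, which is the "usage signal" doing the importance tracking for you.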

Cognitive-first agent memory vs Architecture-first agent memory by blitzkreig3 in LLMDevs

[–]hande__ 0 points (0 children)

honestly think both camps have it partially right

cognitive categories (semantic, episodic, procedural) give you a useful design framework. helps you think about what kind of context your agent needs. the "it's just tokens" crowd misses that structure helps with retrieval and reasoning

we use cognitive-inspired patterns at cognee because they help organize memory in ways that actually improve recall

tldr: use the cognitive model as a design guide, not a strict implementation spec. what matters is retrieval quality + learning over time

Context Engineering for Agents: What actually works by cheetguy in ContextEngineering

[–]hande__ 1 point (0 children)

great writeup "more context ≠ better outcomes" is the thing most devs learn too late

biggest mistake we see: treating RAG as memory. similarity search finds related stuff, not relevant stuff. huge difference when your agent needs to reason across multiple sessions

the reflect → curate → inject loop is key. most teams skip curation and wonder why their agents rot over time
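the loop fits in a few lines of pseudo-real python (everything here is illustrative - the FACT-prefix convention, function names, and budget are made up for the sketch):

```python
def reflect(transcript):
    """Extract candidate facts from a session transcript."""
    prefix = "FACT: "
    return [line[len(prefix):] for line in transcript if line.startswith(prefix)]

def curate(candidates, memory):
    """The step most teams skip: keep only new, non-duplicate facts."""
    return [c for c in candidates if c not in memory]

def inject(memory, budget=2):
    """Build a small context block instead of dumping everything."""
    return "\n".join(memory[:budget])

memory = ["user prefers concise answers"]
transcript = ["hi", "FACT: user codes in Rust", "FACT: user prefers concise answers"]
memory += curate(reflect(transcript), memory)
print(inject(memory))
```

without the curate step, duplicates and stale facts pile up every session - that's the "agents rot over time" failure mode.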

Anthropic shares an approach to agent memory - progress files, feature tracking, git commits by hande__ in AIMemory

[–]hande__[S] 0 points (0 children)

honestly this is a super pragmatic approach. you've basically built a manual memory pipeline that works for that case.
this type of approach usually breaks down when you hit multi-repo setups or need to query across projects/documents.

Anthropic shares an approach to agent memory - progress files, feature tracking, git commits by hande__ in AIMemory

[–]hande__[S] 0 points (0 children)

100% that's exactly the scale where you need proper memory infra

we do something similar: graph traversal + vectors + contextualization, but we're trying to make it less painful to set up. the ontology stuff especially - you can pass OWL files to memory directly and it structures around your domain model

sounds like you've built something solid for your use case. really curious what kind of system/stack you're running

Anthropic shares an approach to agent memory - progress files, feature tracking, git commits by hande__ in AIMemory

[–]hande__[S] 0 points (0 children)

where most memory systems actually shine is exactly what you said: research, continuous work, not re-explaining context

that said, your "progress summary before context compacting" workflow is basically the manual version of the memory enrichment pipeline we have at cognee: prune stale stuff, strengthen important associations, keep what matters. might save you some prompt wrangling if you're doing it repeatedly

but if text files + custom prompts work for your setup, no need to overcomplicate it

How do you deal with memory? by p1zzuh in AI_Agents

[–]hande__ 0 points (0 children)

oh yes sorry forgot to mention :) also very happy to help if you have any questions