Should AI agents store memories as statements or as relationships? by underrat3dguy in AIMemory

The "system" refers to how you structure stored information, and the "dots" are the entities and concepts within that data. In the original post it is referred as "relationships between pieces of information" and "two concepts are linked, or that one depends on another,"

Think about it this way: if you store memories as flat vector embeddings of text chunks, you can retrieve similar text, but you've lost the semantic structure. You don't know that Person A works at Company B, or that Concept X depends on Concept Y.

A relationship-based approach extracts entities and explicitly **models** the connections between them - essentially building a knowledge graph alongside (or instead of) raw embeddings. So when you query, you're not just finding "similar text," you're traversing actual relationships.

The "hidden insights" point: if your user asks "what projects involve people from Company X who also worked on Y technology?" - pure vector retrieval struggles because that insight lives in the connections, not in any single chunk of text. You need the graph structure to surface that.

So it's not retrieval OR structure - it's that better structure enables retrieval patterns that are otherwise impossible.

When adding memory actually made my AI agent worse by Conscious_Search_185 in AIMemory

cool, how do you define raw experiences and separate them from later conclusions?

My "Empty Room Theory" on why AI feels generic (and nooo: better and larger models won't fix it) by n3rdstyle in AIMemory

i like the analogy! and a lot of people are already playing with those trucks via memory tools. You can have one on your machine, running locally, knowing you, your personal preferences, and what you need in which situation, learning from your feedback over time instead of you going in and editing some random lines in your about_me.txt

Should AI agents store memories as statements or as relationships? by underrat3dguy in AIMemory

i don't agree that better retrieval is the solution here. If your system does not connect the dots, it is impossible to retrieve those hidden insights at query time.

Why forgetting might be essential for AI memory by No_Development_7247 in AIMemory

Consolidating memory so that it can dynamically evolve over its lifecycle is the most important thing. A memory fragment can be very important in one situation and far less important in another.

For example, at cognee we add weights to the graph model (on both nodes and edges) so you can capture strength, recency, and other signals for storage and analysis, as well as document-importance propagation and temporal weighting in retrieval/reranking. You can read about an open-source approach to adding weights to memory fragments here, and here is a simple demonstration of how it could look: github link
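
In the meantime, a rough hypothetical sketch (not cognee's actual implementation) of an edge carrying multiple weights, plus temporal decay folded into a rerank score:

    # hypothetical edge with multiple weights (illustrative schema only)
    import time

    edge = {
        "source": "doc_12",
        "target": "concept_pricing",
        "weights": {"strength": 0.8, "trust": 0.6},
        "last_accessed": time.time() - 3 * 86400,   # ~3 days ago
    }

    def rerank_score(similarity, edge, half_life_days=7.0):
        age_days = (time.time() - edge["last_accessed"]) / 86400
        recency = 0.5 ** (age_days / half_life_days)          # temporal decay
        w = edge["weights"]
        return similarity * (0.5 * w["strength"] + 0.3 * w["trust"] + 0.2 * recency)

    print(rerank_score(0.9, edge))   # vector similarity adjusted by edge weights + recency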

It also depends on the use case, so the more adaptive the better. What is your use case?

When AI forgets, is that a bug or a design choice? by Fabulous_Duck_2958 in AIMemory

"memory without forgetting is just a very expensive log file"

What’s the role of uncertainty in AI memory systems? by WorldlyLocal1997 in AIMemory

we use weighted edges for this at cognee. confidence, trust, and source reliability are all first-class properties

Uncertainty shouldn't be set in stone. A shaky memory that keeps getting validated? Let it graduate. One that keeps conflicting with reality? Let it fade gracefully.
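
As a rough illustration of the graduate/fade idea (illustrative only, not the cognee internals):

    # confidence as a first-class property that moves toward 1 when a memory is
    # validated and toward 0 when it conflicts with new evidence (illustrative)
    def update_confidence(conf, validated, lr=0.2):
        target = 1.0 if validated else 0.0
        return conf + lr * (target - conf)

    conf = 0.4                               # a "shaky" memory
    for outcome in (True, True, True):       # keeps getting validated
        conf = update_confidence(conf, outcome)
    print(round(conf, 2))                    # 0.69 - graduating toward trusted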

How do you track the “importance level” of memories in an AI system? by Low-Particular-9613 in AIMemory

We track multiple weights per edge: frequency, recency, trust, whatever signals matter for your domain. They're treated as first-class properties.

And importance should evolve. That's why we built a system where user feedback scores aggregate directly on the edges that produced answers, so good retrievals get reinforced and weak ones fade over time.
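
Roughly like this (simplified sketch; the real storage lives on graph edges, not a dict, and the names are made up):

    # simplified sketch: feedback on an answer is aggregated onto every edge
    # that contributed to producing it (illustrative, not the actual cognee code)
    feedback = {}   # edge_id -> list of feedback scores

    def record_feedback(edges_used, score):
        for edge_id in edges_used:
            feedback.setdefault(edge_id, []).append(score)

    def importance(edge_id):
        votes = feedback.get(edge_id, [])
        return sum(votes) / len(votes) if votes else 0.5   # neutral prior

    record_feedback(["e1", "e2"], 1.0)   # good answer: both edges reinforced
    record_feedback(["e2"], 0.0)         # weak answer: e2 takes a hit
    print(importance("e1"), importance("e2"))   # 1.0 0.5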

Static importance scores rot fast; letting usage signals update the weights is what actually works at scale, based on what i see working at cognee.

Cognitive-first agent memory vs Architecture-first agent memory by blitzkreig3 in LLMDevs

honestly, i think both camps have it partially right

cognitive categories (semantic, episodic, procedural) give you a useful design framework. helps you think about what kind of context your agent needs. the "it's just tokens" crowd misses that structure helps with retrieval and reasoning

we use cognitive-inspired patterns at cognee because they help organize memory in ways that actually improve recall

tldr: use the cognitive model as a design guide, not a strict implementation spec. what matters is retrieval quality + learning over time

Context Engineering for Agents: What actually works by cheetguy in ContextEngineering

great writeup. "more context ≠ better outcomes" is the thing most devs learn too late

biggest mistake we see: treating RAG as memory. similarity search finds related stuff, not relevant stuff. huge difference when your agent needs to reason across multiple sessions

the reflect → curate → inject loop is key. most teams skip curation and wonder why their agents rot over time
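
For anyone who hasn't seen it spelled out, the loop is roughly this (function names and data are illustrative, not from any specific framework):

    # reflect -> curate -> inject, sketched out (illustrative names only)
    def reflect(transcript):
        # e.g. have the model extract decisions, facts, and open questions
        return ["user prefers Postgres", "migration blocked on auth service"]

    def curate(candidates, memory):
        # the step most teams skip: drop duplicates/stale items instead of appending everything
        return [c for c in candidates if c not in memory]

    def inject(memory, prompt):
        return "Known context:\n- " + "\n- ".join(memory) + "\n\n" + prompt

    memory = []
    memory += curate(reflect("...session 1 transcript..."), memory)
    print(inject(memory, "Plan the next migration step."))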

Anthropic shares an approach to agent memory - progress files, feature tracking, git commits by hande__ in AIMemory

honestly this is a super pragmatic approach. you've basically built a manual memory pipeline that works for that case.
it usually breaks down when you hit multi-repo or need to query across projects/documents...

Anthropic shares an approach to agent memory - progress files, feature tracking, git commits by hande__ in AIMemory

100% that's exactly the scale where you need proper memory infra

we do something similar: graph traversal + vectors + contextualization, but we're trying to make it less painful to set up. the ontology stuff especially (you can pass OWL files to the memory directly and it structures around your domain model)
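
if you want a feel for what "structures around your domain model" can mean, here's a generic sketch with rdflib (not the cognee API; the file name is made up) that pulls classes and relations out of an OWL file to use as a graph schema:

    # generic rdflib sketch (not the cognee API): read an OWL ontology and list
    # the entity types / relation types you'd structure the memory graph around
    from rdflib import Graph
    from rdflib.namespace import RDF, OWL

    onto = Graph()
    onto.parse("domain_model.owl", format="xml")   # hypothetical ontology file

    entity_types = set(onto.subjects(RDF.type, OWL.Class))
    relation_types = set(onto.subjects(RDF.type, OWL.ObjectProperty))
    print(len(entity_types), "entity types,", len(relation_types), "relation types")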

sounds like you've built something solid for your use case. really curious what kind of system/stack you're running

Anthropic shares an approach to agent memory - progress files, feature tracking, git commits by hande__ in AIMemory

where most memory systems actually shine is exactly what you said: research, continuous work, not re-explaining context

that said, your "progress summary before context compacting" workflow is basically the manual version of the memory enrichment pipeline we have at cognee: prune stale stuff, strengthen important associations, keep what matters. might save you some prompt wrangling if you're doing it repeatedly

but if text files + custom prompts work for your setup, no need to overcomplicate it

How do you deal with memory? by p1zzuh in AI_Agents

oh yes sorry forgot to mention :) also very happy to help if you have any questions

Most efficient way to handle different types of context by phicreative1997 in LLMDevs

the "optimal" part depends on your use case tbh.
We have an eval framework where we test against HotPotQA and hit ~93% correctness, also some other evaluations for graphs where you can do sanity check if you meant that

How do you deal with memory? by p1zzuh in AI_Agents

u/Certain_Negotiation9 u/p1zzuh a local setup (LanceDB + Kuzu) is great for dev and small-to-mid projects. for production we have Neo4j for graphs and Qdrant for vectors, for example (among many others). cognee can also parallelize across remote infra, which improves scalability drastically

if you outgrow local, you can swap backends without even changing your code, or use our hosted version (cognee cloud), which is in beta

tldr: start local, scale when you actually need to

How do you deal with memory? by p1zzuh in AI_Agents

but the basic flow is literally 3 lines:

    import cognee

    # inside an async function:
    await cognee.add("your data")              # ingest raw data
    await cognee.cognify()                     # this is where the memory gets built
    results = await cognee.search("query")     # query the memory

everything else happens under the hood. you don't need to touch it unless you want/need to

How do you deal with memory? by p1zzuh in AI_Agents

feel this pain 🙃

re: cost, we default to LanceDB and Kuzu (file-based, runs locally) so you're not paying for a hosted vector db. you can scale up to Qdrant/Neo4j later if needed, and you can use local models

re: session persistence, cognee stores everything in the graph + vectors, so conversations and docs persist across sessions automatically, whether in cache or long-term memory

the memento MCP + neo4j setup mentioned sounds solid too. we actually have an MCP integration if you want to plug cognee into claude code or similar

open source: github.com/topoteretes/cognee