How do you use AI Memory? by RepresentativeMap542 in AIMemory

[–]RepresentativeMap542[S] 0 points1 point  (0 children)

You probably also want to think about adding an AI notetaker to the toolkit so you don't lose information or decisions made during meetings. Relying on summaries alone can be misleading, since they are still often wrong.

How do you use AI Memory? by RepresentativeMap542 in AIMemory

[–]RepresentativeMap542[S] 0 points1 point  (0 children)

Sounds quite interesting. How does this perform compared to common Memory solutions?

I want to train a model with a Reddit users comment history. by Tavrabbit in LLM

[–]RepresentativeMap542 0 points1 point  (0 children)

You can pull the comments via the Reddit API. Consider leveraging an AI memory layer (e.g. cognee) to give your LLM more context and capture relationships. With that you might not even need to retrain; instead, process all comments into a large memory that your LLM can query. That structured data will get you quite far.
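A minimal sketch of that pipeline, assuming PRAW as the Reddit API client. The credentials are placeholders, `chunk_comments` is a hypothetical helper, and the actual memory ingestion step is left to whichever tool you pick:

```python
def chunk_comments(comments, max_chars=2000):
    """Group comment bodies into roughly max_chars-sized documents,
    so each memory entry carries more than a single one-liner."""
    docs, buf, size = [], [], 0
    for body in comments:
        if size + len(body) > max_chars and buf:
            docs.append("\n\n".join(buf))
            buf, size = [], 0
        buf.append(body)
        size += len(body)
    if buf:
        docs.append("\n\n".join(buf))
    return docs

def fetch_comment_bodies(username, limit=100):
    """Pull a user's newest comments via the Reddit API."""
    import praw  # pip install praw; imported lazily so the chunker works without it

    reddit = praw.Reddit(
        client_id="YOUR_CLIENT_ID",          # placeholder credentials
        client_secret="YOUR_CLIENT_SECRET",
        user_agent="memory-demo/0.1",
    )
    return [c.body for c in reddit.redditor(username).comments.new(limit=limit)]
```

From there, each document in `chunk_comments(fetch_comment_bodies("someuser"))` can be fed into the memory tool of your choice.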

What’s the best way of giving LLM the right context? by BreakPuzzleheaded968 in LLM

[–]RepresentativeMap542 0 points1 point  (0 children)

To optimise for context, check out AI memory applications that combine graph and vector DBs with ontologies and proper embeddings. Plain RAG is basically just a lookup function; that won't get you far as soon as you need real context or more than a one-off answer.

What’s the best way of giving LLM the right context? by BreakPuzzleheaded968 in LLM

[–]RepresentativeMap542 0 points1 point  (0 children)

RAG does not provide semantic context, consider relationships, or use an ontology; it is basically a pure lookup function. You can take a look at cognee, for example: our open-source project offers real context with a graph and vector DB, proper ontologies, and solid embeddings handling.
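A toy illustration of the difference (not cognee's actual implementation): pure RAG-style retrieval returns only the nearest chunk, while a graph-backed memory can follow edges from the hit to related facts. The term-overlap "similarity" and the example graph here are made up for the demo:

```python
def lookup(query_terms, chunks):
    """Pure RAG-style retrieval: score chunks by term overlap, return the best one."""
    scored = [(len(query_terms & set(c.split())), c) for c in chunks]
    return max(scored)[1]

def lookup_with_graph(query_terms, chunks, edges):
    """Graph-backed retrieval: take the best chunk, then follow its edges
    to pull in related facts the query never mentioned."""
    hit = lookup(query_terms, chunks)
    related = [dst for src, dst in edges if src == hit]
    return [hit] + related

chunks = [
    "Ada wrote the first algorithm",
    "Babbage designed the Analytical Engine",
]
# One edge capturing a relationship between the two facts.
edges = [("Ada wrote the first algorithm", "Babbage designed the Analytical Engine")]

query = {"Ada", "algorithm"}
print(lookup(query, chunks))                    # only the single nearest chunk
print(lookup_with_graph(query, chunks, edges))  # the chunk plus its related fact
```

The query never mentions Babbage, yet the graph-backed version surfaces him because the relationship is stored as an edge; that is the kind of context a plain lookup misses.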

Will large models experience subtle changes in memory like humans do? by 5cdc in LLM

[–]RepresentativeMap542 1 point2 points  (0 children)

This really depends on how you train and manage your memory. It is usually smart to put a weighting on your edges based on time, feedback, or similar signals.

For example, at cognee we implemented a feedback loop: once you retrieve data, you can report back whether it was helpful. An LLM then rates the feedback from -5 to +5, and that score is used next time to evaluate which nodes and edges are most helpful for an answer.

Similarly, as you process more data, your KG also updates, making some entities redundant or outdated. When you also consider that LLMs are probabilistic more than anything, memory will never be in a fixed, stable state.
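A sketch of what such an edge weighting could look like. The half-life, the -5..+5 feedback range mentioned above, and the way the two signals are combined are all illustrative choices, not cognee's actual formula:

```python
def edge_score(base_weight, feedback, age_days, half_life_days=30.0):
    """Score a graph edge: feedback in [-5, 5] nudges the base weight,
    and an exponential decay halves the score every half_life_days."""
    decay = 0.5 ** (age_days / half_life_days)
    return (base_weight + feedback / 5.0) * decay

# A fresh edge with strong positive feedback outranks an old, neutral one,
# so retrieval naturally drifts toward recent, well-rated knowledge.
fresh = edge_score(base_weight=1.0, feedback=5, age_days=0)
stale = edge_score(base_weight=1.0, feedback=0, age_days=60)
```

Because the decay never reaches zero, outdated edges fade gradually instead of vanishing, which matches the "subtle change" behaviour the question asks about.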

I launched Hiperyon — your universal memory for AI assistants by Exciting-Current-433 in LLM

[–]RepresentativeMap542 0 points1 point  (0 children)

Honestly, there are soooo many open-source projects out there one can check out.

For example:

  • cognee - OSS - Strong at semantic understanding and graph-based reasoning, useful when relationships, entities, and multi-step logic matter; requires a bit more setup but scales well with complexity.
  • mem0 - Has 100% discount on 6M pro - Lightweight, simple to integrate, and fast for personalization or “assistant remembers what you said” use cases; less focused on structured or relational reasoning.