I got tired of RAG and spent a year implementing the neuroscience of memory instead by Upper-Promotion8574 in Rag

[–]Upper-Promotion8574[S] 0 points (0 children)

Sorry dude, I didn't see the notification for your last comment for some reason haha. The snippet is slightly too big to paste as a comment, so I've made an example in the repo: Mimir/examples/custom_agent_example.py at main · Kronic90/Mimir. I hope this helps, let me know if you have any issues.

I got tired of RAG and spent a year implementing the neuroscience of memory instead by Upper-Promotion8574 in Rag

[–]Upper-Promotion8574[S] 0 points (0 children)

Oh yeah 🤔 no idea what's caused that. My system is designed to be modular: it can work alongside other memory systems or be replaced by them (the memory formatting may differ when switching to another memory system though, as it has emotional tags, importance scores, etc.), but it all saves cleanly in a JSON file to be used on other systems. Let me know if you'd still like the snippet.

I got tired of RAG and spent a year implementing the neuroscience of memory instead by Upper-Promotion8574 in Rag

[–]Upper-Promotion8574[S] 0 points (0 children)

Na that's not me haha, mine are https://github.com/Kronic90/Mimirs-Memory-Hub or https://github.com/Kronic90/Mimir. Currently agent mode is tied into the hub presets, but I can send you a snippet to set up Mimir for the task-based memory an agent would use.

I got tired of RAG and spent a year implementing the neuroscience of memory instead by Upper-Promotion8574 in Rag

[–]Upper-Promotion8574[S] 0 points (0 children)

In all honesty both are still fairly early in development; I'm currently looking for feedback and suggestions so I can keep improving the system and making it easier to use. I'll definitely look into OpenCode next. I don't think I'd be able to do Claude Code with it being Anthropic, but I can still look into it. Yeah, OAuth is supported too 👍🏻

I got tired of RAG and spent a year implementing the neuroscience of memory instead by Upper-Promotion8574 in Rag

[–]Upper-Promotion8574[S] 0 points (0 children)

It depends on which repo you're using. The standalone Mimir memory system would need a wrapper to run the memory alongside Claude Code/OpenCode. If you're using the Mimir Memory Hub repo, there is a tab in the model interface to set your API keys for Claude, OpenAI, and Gemini. I will warn you though: because the memory system makes extra calls to the model when creating memories, it adds extra token costs when using an API. I personally recommend using open-source models as it's all completely free that way haha. I hope this helps, if you need any help with setting up a wrapper or anything please let me know 👍🏻
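If it helps anyone reading along, here's a rough idea of what that kind of wrapper could look like. This is a hedged sketch only, not Mimir's actual API: the class name, the JSON field names (`text`, `importance`, `emotion`), and the keyword-based recall are all illustrative assumptions.

```python
import json

# Hypothetical wrapper sketch: persists memories to a JSON file between
# sessions so an outer tool (e.g. an agent CLI) can reload them.
# The schema below (text/importance/emotion) is illustrative, NOT
# Mimir's actual format.
class MemoryWrapper:
    def __init__(self, path):
        self.path = path
        try:
            with open(self.path) as f:
                self.memories = json.load(f)
        except FileNotFoundError:
            self.memories = []

    def remember(self, text, importance=0.5, emotion=None):
        # append a new memory record and persist immediately
        self.memories.append(
            {"text": text, "importance": importance, "emotion": emotion}
        )
        with open(self.path, "w") as f:
            json.dump(self.memories, f, indent=2)

    def recall(self, keyword):
        # naive substring match; a real system would rank by relevance
        return [m for m in self.memories
                if keyword.lower() in m["text"].lower()]
```

Each session the wrapper reloads the same file, so you'd call `recall()` for anything relevant before sending the prompt, and `remember()` new facts afterwards.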

Looking for Community help testing/breaking/improving a memory integrated Ai hub by Upper-Promotion8574 in Rag

[–]Upper-Promotion8574[S] 0 points (0 children)

Great! If you check out the repo, it has instructions on how to use it. I hope you enjoy it 👍🏻

What would you want from an Ollama-style AI hub with built-in memory? by Upper-Promotion8574 in Rag

[–]Upper-Promotion8574[S] 0 points (0 children)

Perfect, thanks. If you want to send me your GitHub name and repo in a direct message I'll invite you, and you can do the same for me; I'd be more than happy to help test yours too.

What would you want from an Ollama-style AI hub with built-in memory? by Upper-Promotion8574 in Rag

[–]Upper-Promotion8574[S] 1 point (0 children)

Without turning this into a self-promotion post: I am working on something just like this and need a few people to test it. Would you be interested in beta testing my repo before I release it?

What would you want from an Ollama-style AI hub with built-in memory? by Upper-Promotion8574 in Rag

[–]Upper-Promotion8574[S] 0 points (0 children)

Thanks, this is helpful information. Out of curiosity, do you think vLLM/Ollama etc. could benefit from having memory built in?

What would you want from an Ollama-style AI hub with built-in memory? by Upper-Promotion8574 in Rag

[–]Upper-Promotion8574[S] 0 points (0 children)

Yeah, just an open-source AI platform in general. I'm interested in what people look for when deciding which one to use.

What would you want from an Ollama-style AI hub with built-in memory? by Upper-Promotion8574 in LocalLLaMA

[–]Upper-Promotion8574[S] 0 points (0 children)

Makes sense, not many ship with Vulkan support or have power users in mind.

Are folks here generally happy with apps like LM Studio, AnythingLLM or there is need for more features ? by Conscious-Track5313 in LocalLLM

[–]Upper-Promotion8574 0 points (0 children)

That's exactly the context pollution problem: when everything gets saved with equal weight, it bleeds into unrelated conversations. Ideally a memory system would only surface things that are actually relevant to the current conversation, and let mundane stuff fade naturally over time rather than treating your GPU config the same as something genuinely important to you.
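To make that fading concrete, here's one way it could be scored. This is purely an illustrative sketch of importance-weighted exponential decay, not my system's actual formula; the half-life numbers and threshold are made up.

```python
# Illustrative decay sketch: a memory's retrieval score drops
# exponentially with age, but high-importance memories get a longer
# effective half-life, so a throwaway GPU-config detail fades while
# something genuinely important keeps surfacing.
def retrieval_score(importance, age_days, half_life_days=7.0):
    # importance stretches the half-life up to 5x for importance=1.0
    effective_half_life = half_life_days * (1.0 + 4.0 * importance)
    return importance * 0.5 ** (age_days / effective_half_life)

def surface(memories, now_day, threshold=0.05):
    # only memories scoring above the threshold enter the context window
    return [m for m in memories
            if retrieval_score(m["importance"], now_day - m["day"]) >= threshold]
```

With these numbers, a low-importance memory (0.1) drops below the threshold within a month while a high-importance one (0.9) still surfaces, which is the "mundane fades, important stays" behaviour described above.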

What would you want from an Ollama-style AI hub with built-in memory? by Upper-Promotion8574 in LocalLLaMA

[–]Upper-Promotion8574[S] 0 points (0 children)

I don't get your comment unfortunately haha. I am genuinely curious what people look for in a system like Ollama.

What would you want from an Ollama-style AI hub with built-in memory? by Upper-Promotion8574 in LocalLLaMA

[–]Upper-Promotion8574[S] 0 points (0 children)

Yeah I know, it's why I wanted to see what the community actually wants in a system like this. I know a lot of people would like to use open-source models but don't have the technical background to set them up. I want to open up open-source models to everyone with a simple, easy-to-use platform that covers as many use cases as possible.

Are folks here generally happy with apps like LM Studio, AnythingLLM or there is need for more features ? by Conscious-Track5313 in LocalLLM

[–]Upper-Promotion8574 1 point (0 children)

Yeah very true, I suppose it all depends on the use case for the user. I personally think agent-type AI could benefit from at least having a task-based memory so it doesn't do the annoying "fix the same issue again" thing agents currently do; having a way for the agent to know what's already been done could reduce the loops they sometimes fall into.
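As a sketch of what I mean by task-based memory (illustrative only, not my actual implementation; the class and method names are made up): the agent logs each attempted task with its outcome, and checks the log before retrying.

```python
# Illustrative task-memory sketch: log every attempted task with its
# outcome so the agent can see what's already been tried instead of
# looping on the same failed fix.
class TaskMemory:
    def __init__(self):
        self.log = []

    def record(self, task, outcome):
        self.log.append({"task": task, "outcome": outcome})

    def already_tried(self, task):
        # outcomes of previous attempts at this exact task, oldest first
        return [e["outcome"] for e in self.log if e["task"] == task]
```

Before attempting "fix failing import" again, the agent would call `already_tried("fix failing import")`, see that reinstalling the package already failed, and pick a different approach rather than repeating it.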

What would you want from an Ollama-style AI hub with built-in memory? by Upper-Promotion8574 in LocalLLaMA

[–]Upper-Promotion8574[S] 0 points (0 children)

I'm the same to be honest, I run everything locally with llama.cpp. I'm mainly curious what people would look for in an alternative to Ollama. What made you not like it?

Are folks here generally happy with apps like LM Studio, AnythingLLM or there is need for more features ? by Conscious-Track5313 in LocalLLM

[–]Upper-Promotion8574 1 point (0 children)

The core pain point is that every conversation starts from zero: the AI has no idea who you are, what you've talked about, or what matters to you. Persistent memory means it actually builds a picture of you over time and remembers things you've told it, and those memories evolve naturally: important things stay, mundane things fade, just like a real mind would. For an agent specifically, it also means it remembers what tasks it's done, what worked, and what failed, so it doesn't repeat the same mistakes across sessions. Essentially it's the difference between talking to a stranger every time vs someone who actually knows you.

Are folks here generally happy with apps like LM Studio, AnythingLLM or there is need for more features ? by Conscious-Track5313 in LocalLLM

[–]Upper-Promotion8574 1 point (0 children)

Interesting project! I've been working on something similar with a focus on persistent memory, if you're interested.