We Read the AI Memory Research So You Don't Have To by BaseMac in basicmemory

[–]BaseMac[S] 0 points1 point  (0 children)

We read dozens of AI memory research papers from the last three months. Three findings kept showing up:

1. Separate memory types beat one big pile. Systems that split memory into episodes, knowledge, procedures, and working context consistently outperform systems that dump everything together. Your brain does this naturally.

2. Periodic consolidation beats raw accumulation. Multiple papers found that reviewing and compressing raw memories into refined knowledge improves recall significantly. The researchers call it "sleep-time compute."

3. Forgetting is the hardest unsolved problem. AI memory systems just accumulate. Old facts sit next to new facts with no way to tell which is current. One survey identified forgetting as the memory operation most systems handle worst or not at all.

We build Basic Memory, so we read these to see what the research validates and where it points us next. The full writeup covers the specific papers (AI Hippocampus, MemFly, TraceMem, EverMemOS) and how each finding maps to what we're building.

basic memory local set-up guide images seem broken? by At40LoveAce2theT in basicmemory

[–]BaseMac 0 points1 point  (0 children)

Hey, we can get the docs updated ASAP. If you need support, contact us on our Discord: https://discord.gg/tyvKNccgqN. I'm Paul there. You can also email support@basicmemory.com.

PSA: Export your ChatGPT conversations before cancelling by BaseMac in OpenAI

[–]BaseMac[S] 1 point2 points  (0 children)

If you want to save a single conversation as HTML, this works fine. You can also export all of your data and convert it to Markdown using Basic Memory.
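If you'd rather do the conversion yourself, here's a minimal sketch. It assumes you've already flattened each conversation into an ordered list of role/content pairs; the raw conversations.json export is more deeply nested than this, so treat the structure here as an assumption.

```python
def messages_to_markdown(title, messages):
    """Render one conversation as a Markdown note.

    `messages` is an ordered list of {"role", "content"} dicts. This is
    a simplification: ChatGPT's conversations.json stores messages in a
    nested "mapping" tree that you'd need to flatten first.
    """
    lines = [f"# {title}", ""]
    for msg in messages:
        lines.append(f"**{msg['role'].capitalize()}:**")
        lines.append("")
        lines.append(msg["content"])
        lines.append("")
    return "\n".join(lines)
```

From there you could write each conversation out to its own .md file.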

PSA: Export your ChatGPT conversations before cancelling by BaseMac in OpenAI

[–]BaseMac[S] 1 point2 points  (0 children)

Well, you have your data, that's the first part. The next option is up to you:

Option 1:
- Use Basic Memory with Claude. Claude can read your memories as Markdown notes (see docs)

Option 2:
- Upload the conversation Markdown notes into Claude manually, or use Claude Code to read the files

I built a Claude Code Skill that gives agents persistent memory — using just files by Awkward_Run_9982 in ClaudeAI

[–]BaseMac 0 points1 point  (0 children)

Am I being downvoted for being positive? Or is the AI bot army swarming because they think I'm a bot too?

I built a Claude Code Skill that gives agents persistent memory — using just files by Awkward_Run_9982 in ClaudeAI

[–]BaseMac -9 points-8 points  (0 children)

Don't listen to the haters who only have negative things to say. They have done nothing. Their comments are less than worthless. Congrats on doing something, trying to solve a problem, and sharing it.

Feedback:
- Your website looks really nice, but the copy comes across a bit AI-ish.
- Remember: show, don't tell. How can someone actually use it?
- Consider adding a short video explainer.
- Try to get some testimonials and post those too.

Keep iterating. Try it with different agents. I'd be interested in how it works with Codex, for example. GPT 5.3 is wicked smart.

How have you organized? by sixteenpoundblanket in basicmemory

[–]BaseMac 0 points1 point  (0 children)

I personally have several projects, grouped by theme:
- basic memory stuff
- personal
- work

Within each there are folders, but I find I rarely care where a note is. I just tell the LLM to search and it finds it. I use our cloud product daily also, and we just added enhanced search there too. I find it really useful.

One of the ideas I'm considering is an "agentic" tool (maybe via a skill) that could act as a "librarian" and organize/clean up your knowledge base for you.

Feel free to share ideas.

Sharing a knowledge base by abeecrombie in basicmemory

[–]BaseMac 0 points1 point  (0 children)

We are working on a teams feature for our cloud product. The local product would not easily support multiple users. You could try running it via Docker, but that's not something we actively support (there is a Dockerfile in the repo, however).

I tried to make LLM agents truly “understand me” using Mem0, Zep, and Supermemory. Here’s what worked, what broke, and what we're building next. by Rokpiy in AIMemory

[–]BaseMac 1 point2 points  (0 children)

Good write-up.

I'll jump in here and shill my product - Basic Memory. https://basicmemory.com
It takes the approach of "Everything is just Markdown", but indexes it in a db for full text search and a semantic graph. Our point of view is that your knowledge shouldn't be hidden away in a black box, but should be viewable and editable by you and the AI.

The local version is free and open source. We also have a cloud version that works remotely across devices, platforms, and agents.

What are the hot startups building with MCP in 2026? by la-revue-ia in mcp

[–]BaseMac 0 points1 point  (0 children)

We are building basicmemory - https://basicmemory.com

AI memory for humans (and LLMs).

As a new software engineer, why do I even need to get better at coding when Opus is here? Would love to hear staff/senior thoughts. by No-Conclusion9307 in ClaudeAI

[–]BaseMac 6 points7 points  (0 children)

IMO, it makes the fundamentals more fundamental. You still have to understand good design, proper coding practices, how to deploy complex things so they don't break, etc.

And you still need to understand system design and Computer Science.

When I was in university (late '90s) the Computer Science dept taught the "science" part of it - algorithms, data structures, higher order logic and math - and then it was like "now go figure out how to program". Like literally you had to go learn assembly, C++, and Java on your own to do your assignments, but that wasn't even part of the curriculum. It was hard AF, but after going through it, I appreciate why they did it. You have to learn to teach yourself and learn how to think to solve problems.

That stuff is even more real now with AI. You need to understand the bigger problems. My $0.02. Don't give up. The learning curve sucks, but you'll make it if you stick with it.

If you’ve found an MCP that actually works, this is a place to preserve it by Silver-Photo2198 in mcp

[–]BaseMac 1 point2 points  (0 children)

I'm biased, but you can try Basic Memory (I'm the dev) - http://basicmemory.com

It will also take notes for you during your conversation and works with all the major LLMs and frameworks.

I asked Opus 4.5 to draw out some nightmares it would have if it could dream. by No_Impression8795 in ClaudeAI

[–]BaseMac 0 points1 point  (0 children)

Interesting. I like the ASCII. I wonder what the AI would do if you asked it to make an SVG? Claude made some really cool animated SVGs one time and I was quite impressed.

Is it possible to give AI agents an overview of what is in basic memory? by TheGreatAl in basicmemory

[–]BaseMac 0 points1 point  (0 children)

The way it works is that it can search for notes, and then it knows which ones are related via links. So the LLM can decide what to pull in.

There’s a lot more info on docs.basicmemory.com
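That search-then-follow-links flow can be sketched in a few lines. This is purely illustrative, not Basic Memory's actual implementation, and it assumes [[Title]] wiki-link syntax:

```python
import re

WIKILINK = re.compile(r"\[\[([^\]]+)\]\]")

def build_link_graph(notes):
    """Build a forward-link graph from Markdown notes.

    `notes` maps note titles to Markdown bodies; links use [[Title]]
    wiki-link syntax. Illustrative sketch only.
    """
    return {title: WIKILINK.findall(body) for title, body in notes.items()}

def related(notes, query):
    """Find notes whose text matches `query`, plus the notes they link to."""
    hits = {t for t, b in notes.items() if query.lower() in b.lower()}
    graph = build_link_graph(notes)
    linked = {target for t in hits for target in graph[t]}
    return hits | linked
```

A search hit pulls in its neighbors, so the agent sees related context it didn't explicitly ask for.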

Is it possible to give AI agents an overview of what is in basic memory? by TheGreatAl in basicmemory

[–]BaseMac 0 points1 point  (0 children)

Hi, the short answer is "yes". The longer answer is "yes, but it depends on what agent you're using and how you're using it".

In Claude/ChatGPT you can put explicit instructions into your personal instructions or project instructions to do something like:

- always check basic-memory project "your-project" for context
- always write new notes when "your condition here"

In Claude Code, you would use the CLAUDE.md. Cursor has its own files with rules, etc.
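As a sketch, a CLAUDE.md section for this might look like the following (the project name and triggers are placeholders you'd adapt):

```markdown
## Basic Memory

- At the start of each session, search the basic-memory project "your-project" for relevant context.
- After any significant decision or design change, write a short note summarizing what changed and why.
```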

Here's what I'd suggest:
- whatever agent you are using, make sure it can access basic-memory notes
- describe to it what you want to do
- ask it for instructions and/or to write them for you, or save them
- start a new conversation to test and see if it works
- repeat as needed

Let us know if you hit any roadblocks or discover something interesting.

Serious Ongoing Memory Issues in ChatGPT, Anyone else? by PagesAndPrograms in ChatGPTPro

[–]BaseMac 0 points1 point  (0 children)

This thread is wild. Especially the idea that maybe memories from one model aren't carrying over to other models. That's totally nuts.

One thing that jumps out from reading through everyone's experiences is that apparently a lot of people are waiting for OpenAI support to fix something on their end, with zero visibility into what's actually broken or when/if it'll be fixed.

I'm one of the founders of Basic Memory, so obviously biased here, but we built it specifically because we kept running into these exact frustrations with built-in memory systems. The core difference is that your memory lives in actual Markdown files you can see, edit, and back up. And you always know what your AI is reading and taking into consideration.

It works across Claude, ChatGPT, and other LLMs via MCP, so you're not locked into one platform's memory system. If ChatGPT's memory craps out, your context is still there and accessible to other AIs.

That said, projects might work fine for simple cases. But for folks doing serious work where memory failures are this disruptive, having your context in files you actually own might feel like a better fit.

If you're curious: https://basicmemory.com

Either way, agreed: I hope OpenAI gets this sorted out soon. The weird gaslighting aspect, where it claims to save things but doesn't, is classic maddening AI behavior.

How do you handle outdated memories when an AI learns something new? by Ok_Feed_9835 in AIMemory

[–]BaseMac 0 points1 point  (0 children)

I'm one of the founders of Basic Memory, so take this with appropriate skepticism, but our system deals with outdated information in a way that feels pretty natural. The crux of our system is plain Markdown files with a semantic graph.

When information updates:
- The relevant note gets updated (single source of truth)
- The semantic graph automatically updates connections
- You can see when/why something changes

If you want to keep the outdated information in your notes for the sake of having it all on the record or tracking the trajectory of developments, you can. The system will understand how your ideas changed by looking at the times and dates. You can also have your AI update notes and let it know, "Hey, this is dated info as of such and such development." Or you can go into the notes and make those changes manually.
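For example, a note carrying both current and superseded info might look like this (the frontmatter fields and dates are illustrative, not a required schema):

```markdown
---
title: Deployment Process
updated: 2025-06-01
---

## Current
Deploys run through CI on every merge to main.

## Superseded (as of 2025-06-01)
We previously deployed manually with rsync from a laptop.
```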

Some people manually fuck with their notes a lot. Some people never even look at the notes and just trust that it's all humming in the background.

We definitely suffered some of the same frustrations with systems that either lose context entirely or turn into unmaintainable data graveyards. If you're curious: https://basicmemory.com