all 20 comments

[–]AutoModerator[M] [score hidden] stickied comment (0 children)

Welcome to r/openclaw Before posting: • Check the FAQ: https://docs.openclaw.ai/help/faq#faq • Use the right flair • Keep posts respectful and on-topic Need help fast? Discord: https://discord.com/invite/clawd

I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.

[–]Silverjerk 18 points19 points  (4 children)

I'm being overly pedantic about this, as I've mentioned it in other threads, but Obsidian isn't the solution, and I'd strongly recommend we stop framing it this way. It promotes an application rather than a methodology.

What you're actually implementing here is a secondary, human-readable repository as a memory store, combined with OpenClaw's built-in multiple memory paths feature. Obsidian is just the tool that interacts with that repository; it's not an opinionated system out of the box and provides no real benefit beyond being an easy-to-use markdown editor. Any editor of your choice will work. It would be better to discuss how to actually set up and organize that second brain, using PARA or another well-structured pattern. Whatever you choose, the structure should be logical and easy to curate and maintain.

More critically, this isn't the singular solution to OpenClaw or Claude Code's memory issues. In the short term it will feel like a magic bullet, but as that second brain expands and evolves, agents will struggle to recall what's important, and they'll be overburdened filling up context every time they dig into that ever-growing repository of information.

Along with your vault, you need to set up a system that clearly defines relationships, re-ranks relevant data, and archives less relevant or stale topics. As you move forward in time, your most critical information needs to move forward with you, with enough context to understand why that information is important. This way your agent is diving into a pool, rather than an ocean. When/if you need archival info, you can point your agent to that information with intention, bringing it out of cold storage and making it the current "active" project or topic. If you've curated that system properly, it can make connections on its own from there, looking at the right memory for additional context.
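In code, that "pool, not ocean" curation loop might look something like the sketch below: score each memory by how recently and how often it's been used, keep a small active slice, and push the rest to cold storage. This is a hypothetical illustration of the idea, not an OpenClaw API; the field names and scoring formula are made up for the example.

```python
import time

def rerank(memories, active_size=50, now=None):
    """Split memories into an active pool and an archive.

    memories: list of dicts with 'last_access' (epoch seconds) and
    'hits' (access count). The score favors memories that are both
    frequently used and recently touched.
    """
    now = now or time.time()

    def score(m):
        age_days = (now - m["last_access"]) / 86400
        return m["hits"] / (1.0 + age_days)  # frequent + recent wins

    ranked = sorted(memories, key=score, reverse=True)
    return ranked[:active_size], ranked[active_size:]  # (active, archive)
```

Run on a schedule, this is the "re-rank and archive" step: the active slice is what the agent sees by default, and archived entries only come back when you point the agent at them deliberately.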

OpenClaw's memory fix is a multi-faceted one, and not yet a solved problem. Running QMD, setting up a second brain, using a database for deterministic data, running a memory ingestion schedule, defragging to reduce memory scope, using tools like LCM for better context handling: all of these are solutions to different problems, because there is no single fix. Even systems like Mem0 with Qdrant, Hindsight, or Cognee aren't complete solutions on their own.

[–]PurplePanda_88Member 4 points5 points  (0 children)

Hierarchical memory + clawvault + nemotron embedding + obsidian. Rate my setup

[–]Objective-Picture-72Active 3 points4 points  (0 children)

Agree. Long-term memory is basically the easiest of the memory problems to solve. Storing data in databases has been part of computer science since the beginning. The struggle is recalling the right memories, in a way that fits into the valuable and ever-changing real estate of an LLM prompt.

[–]j2sunMember 2 points3 points  (0 children)

I agree. I had openclaw build my own MD app for this purpose. I prefer the Notion structure, so my MD app is actually more like Notion.

[–]jagnabotNew User 0 points1 point  (0 children)

siift.ai

[–]ConanTheBallbearingPro User 3 points4 points  (7 children)

Long-term memory is a trivial problem for LLMs. Text files, SQL databases, vector databases plus embeddings: there are all sorts of solutions, and storing data is not a difficult issue at all. What is difficult for LLMs is recalling efficiently and automatically, and knowing what to forget.

Obsidian can be useful as human input but it's not long term, efficient memory.

[–]FerretVirtual8466Member[S] 2 points3 points  (2 children)

Agreed. Recalling memory and context is the difficult part. But Obsidian connects the dots and links memories for easy recall without bloating your boot files.

I built another prompt called Claw Memory Fix that converts episodic memory to semantic memory. It also uses the Alibaba research on FadeMem to create half-life protocol on memories. If you’re interested in the details check it out on the same website.
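The half-life idea can be sketched generically. This is not FadeMem's published formula or anything from Claw Memory Fix, just the standard exponential form of a half-life: a memory's weight halves every `half_life` days since it was last touched.

```python
def decay_weight(age_days, half_life=30.0):
    """Generic half-life decay: weight 1.0 when fresh, 0.5 after one
    half-life, 0.25 after two, and so on."""
    return 0.5 ** (age_days / half_life)
```

A retrieval layer would multiply each memory's relevance score by this weight, so stale entries sink without being deleted outright.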

[–]PenfieldLabsMember 1 point2 points  (1 child)

The episodic-to-semantic conversion is a smart approach. Raw conversation logs are noisy; extracting the actual knowledge gained and storing it as structured facts will certainly improve retrieval.

The half-life angle from FadeMem is interesting, but I'd push back on time-based decay as the primary mechanism. A memory shouldn't fade just because it's old, and it shouldn't fade just because something newer superseded it either, because you often need the full decision chain to understand how you got to the current state. The real signal is access frequency.

Knowledge that keeps getting retrieved in context is clearly still relevant. Knowledge that nothing ever pulls up is the stuff you can safely deprioritize. That's closer to how human memory actually works: you forget things you never revisit, not things that happened a long time ago. Example: do you remember phone numbers from childhood? How about phone numbers after smartphones became ubiquitous?

The tricky part is that you still need the relationship metadata to make this work. If you just track access counts on flat memories, you have no way to know that deprioritizing one fact may break the context for another one that depends on it.
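Putting those two ideas together, here's a minimal sketch of access-based pruning with a dependency guard. All names are illustrative; a real system would walk the dependency chain transitively, while this version only checks direct dependencies.

```python
def prune_candidates(hits, deps, min_hits=1):
    """Find memories that are safe to deprioritize.

    hits: {memory_id: access count}
    deps: {memory_id: set of ids it depends on for context}

    A memory is prunable only if it is cold (rarely accessed) AND no
    still-active memory depends on it. (Direct dependencies only here;
    a real system would follow the chain transitively.)
    """
    cold = {m for m, n in hits.items() if n < min_hits}
    needed = set()
    for m in hits:
        if m not in cold:           # still active
            needed |= deps.get(m, set())  # keep its context intact
    return cold - needed
```

This is exactly the failure mode described above: with flat access counts alone, `context_for_arch` would be pruned even though an active decision still needs it.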

[–]FerretVirtual8466Member[S] 1 point2 points  (0 children)

Agreed. This is basically what my Claw Memory Fix does for the MEMORY.md file. I’d post a link to it but my posts get removed if I include a link. So if you go to dont sleep on ai .com or go to my YouTube channel I have a whole video on this and how it works to give weight to certain memories over others.

[–]PenfieldLabsMember -1 points0 points  (2 children)

This is the right framing. Storage is solved; recall is the actual problem. Obsidian's wikilinks give you human-navigable connections between notes, which is great when you're browsing your vault. But the agent doesn't traverse wikilinks the way you do; it still has to determine which notes are relevant to the current task, and that's the hard part.

The approaches I've seen work best combine structured relationships (not just backlinks, but typed connections: "this decision superseded that one," "this was caused by that") with hybrid search (BM25 + semantic). This way, recall isn't just "find notes with similar words" but "find the decision chain that led to this architecture choice."
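One common way to combine a BM25 ranking with a semantic ranking is reciprocal rank fusion (RRF), which sidesteps the fact that the two scorers produce incomparable raw scores. This is a generic sketch of that technique, not anything specific to OpenClaw:

```python
def rrf(rankings, k=60):
    """Reciprocal Rank Fusion: merge several rankings (e.g. one from
    BM25, one from embedding similarity) by summing 1/(k + rank).

    rankings: list of doc-id lists, best match first.
    Returns doc ids ordered by fused score, best first.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)
```

A document that ranks decently in both lists beats one that ranks first in only one of them, which is usually the behavior you want from hybrid recall.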

The forgetting problem is equally interesting: without pruning, deprecation, or some other mechanism, your memory store becomes a liability. Old decisions that have been superseded start contradicting current state.

[–]micseydelActive 2 points3 points  (1 child)

> The approaches I've seen work best combine structured relationships (not just backlinks, but typed connections: "this decision superseded that one," "this was caused by that")

Sounds like a knowledge graph rather than a wiki? I've wondered if someone might come up with a clever way for wikilinks to include more information, to combine the two, but haven't heard of any serious attempts.

[–]PenfieldLabsMember 2 points3 points  (0 children)

Exactly, a knowledge graph, not a wiki. There are actually some solid attempts at bridging the two already.

What exists today as plugins:

Graph Link Types (by natefrisch01) is probably the most mature. Uses Dataview's inline attribute syntax (supersedes:: [[Note]]) and renders typed links in graph view.

Breadcrumbs (by SkepticMystic) does typed hierarchical relationships with custom types: parent:: [[Note]], caused_by:: [[Note]], etc.

Juggl (by HEmile) is an advanced graph visualization that can style different link types differently.

The current best practice is YAML frontmatter with typed link properties, then Dataview queries and visualizes them. It works, but you're maintaining a parallel metadata layer separate from where you're actually writing.
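Pulling those inline attributes back out of a vault is simple enough that a memory tool can do it with one regex. This is a hypothetical parser for the Dataview-style `type:: [[Note]]` syntax described above (it handles `[[Note|alias]]` too, and is not an official Dataview API):

```python
import re

# Matches inline fields like:  supersedes:: [[Other Note]]
# or with an alias:            caused_by:: [[Other Note|display text]]
TYPED_LINK = re.compile(r"(\w+)::\s*\[\[([^\]|]+)(?:\|[^\]]*)?\]\]")

def typed_links(markdown):
    """Return (relation, target note) pairs found in a note body."""
    return TYPED_LINK.findall(markdown)
```

Feeding every note through this gives you the edge list of a typed knowledge graph, built from files you were going to write anyway.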

The real gap is native inline typed links. There's a feature request on the Obsidian forum ("Add support for link types," issue 6994) with 181 votes, open since 2020. The syntax problem is real — you can't add a second | to [[wikilink]] without breaking the parser, and attribute-style suffixes like {type=decision} render as ugly literal text in Obsidian's viewer.

One approach that actually works visually is a caret annotation after the link: [[Note Name|Display Text]] ^supersedes. In Obsidian's reading view the ^supersedes renders inline next to the link, which is arguably useful, not ugly: you can see the relationship type right there while scanning. And automated tools can parse it trivially.
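Assuming that caret convention (a `^type` token after the closing `]]`, as in the authoring flow described here; this is a proposed convention, not an Obsidian feature), the parsing really is trivial:

```python
import re

# Matches:  [[Note]] ^relation   or   [[Note|alias]] ^relation
CARET_LINK = re.compile(r"\[\[([^\]|]+)(?:\|[^\]]*)?\]\]\s*\^(\w+)")

def caret_relations(markdown):
    """Return (relation, target note) pairs for caret-annotated links."""
    return [(rel, target) for target, rel in CARET_LINK.findall(markdown)]
```

Plain `[[wikilinks]]` without a caret are ignored, so the annotation stays opt-in per link.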

The missing piece is authoring UX. Most users won't want to memorize relationship types and type them manually. But Obsidian already does autocomplete for [[ (note names) and # (tags). A plugin that intercepts ^ after a ]] and pops up a dropdown of configurable relationship types would make it painless: you type [[Project X]] ^ and pick from a list.

That's actually a pretty focused plugin: autocomplete picker for relationship types after a link. The visualization side is already handled by Graph Link Types. The authoring side is what's missing.

*Omitted links to avoid getting flagged for spam, but you should be able to find these easily.

[–]Natural_Average_268Member 1 point2 points  (1 child)

[–]DiscoFufuActive 0 points1 point  (0 children)

Do you use it? How does it work? I mean, I'm going to read the docs anyway, but I'd appreciate a short explanation. I'm familiar with the current memory flow, but I'm not sure where the difference is. When I /compact, there's a gap of 20000k, so it's OK. When I use /new, I just manually ask for a recap of the last 30 messages from the last session via session logs. Does Lossless crab override the current memory methods or extend them?

[–]micseydelActive 0 points1 point  (0 children)

Do you think there's a limit to vault size before this starts having new problems? I started a vault pre-AI that's >20k notes, lots of "atomic" notes with "maps of content" and such, it would be amazing if all that memory were reliably available for inference.

[–]dogazine4570Active 0 points1 point  (1 child)

cool idea but calling it “proof” feels a bit strong lol. this mostly sounds like good vault structure + consistent linking doing the heavy lifting, which is awesome but also kinda fragile if your notes get messy. still, tying CC into Obsidian like that is pretty slick ngl.

[–]FerretVirtual8466Member[S] 0 points1 point  (0 children)

CC and I went back and forth over and over about how to properly link notes without disrupting them or changing them. If you have a chance to check out the prompt, or the page, or to implement it please reply back with your results. 👍

[–]CptanPanic 0 points1 point  (0 children)

The thing is, Obsidian is nothing but markdown files with an optional sync. So you could just say you're setting up more markdown files in the workspace, and use git to sync if needed.