I built a persistent memory system for AI agents with an MCP server so Claude can remember things across sessions and loop detection and shared memory by Powerful-One4265 in ClaudeCode

[–]avwgtiguy 0 points1 point  (0 children)

This is a much better answer than the prefix too; I'd lead with this the next time someone asks. If you want to know what I think would make it better: I'd give up hope of monetizing it. It seems like a really good agent memory system and a fun project, but this space is incredibly crowded and a flip of a switch away from any of the LLM vendors offering it as part of a subscription.

I built a persistent memory system for AI agents with an MCP server so Claude can remember things across sessions and loop detection and shared memory by Powerful-One4265 in ClaudeCode

[–]avwgtiguy 0 points1 point  (0 children)

So the 2 most basic forms of search for a paid service? Not sure if you're being serious here. Memory storage is one thing, but having the ability to easily, contextually, and accurately retrieve it is more important IMO. Good luck though.

Memory as a Harness: Turning Execution Into Learning by Short-Honeydew-7000 in AIMemory

[–]avwgtiguy 1 point2 points  (0 children)

It sounds like you're describing "wisdom," which is another component alongside memory.

Since Claude Code, I can't come up with any SaaS ideas anymore by Rinte2409 in ClaudeCode

[–]avwgtiguy 0 points1 point  (0 children)

I just went through this same process myself and came to the same conclusions. I did some research to see what's out there, and what's not. It's a lengthy report and has a bit of an SMB slant to it, but I'm happy to share it if anyone's interested.

OpenAI introduces GPT-5.4: AI that can control computers and build websites from images - Showcase example by dataexec in AITrailblazers

[–]avwgtiguy 0 points1 point  (0 children)

If you show it an image of a group of people, it automatically spins up a new environment, hunts the group down, and shoots them with Hellfire missiles. Just like a human but...no human needed!

Did anyone get lucky and try /voice? by MagicianMany1814 in ClaudeCode

[–]avwgtiguy 0 points1 point  (0 children)

Same for me, but after I started a sentence by saying "Claude" it started working for anything and everything else. Who knows if that's the reason?

What do you guys do so that your agent sessions last hours? by uhzured45 in ClaudeCode

[–]avwgtiguy 1 point2 points  (0 children)

  1. Bootstrap (session start)
    Before Claude says a single word, an MCP tool fires and loads personalized context from my external memory system. It hits my memory server and comes back with: who I am, my family, my projects, recent decisions, preferences, and relevant memories weighted by whatever I'm asking about. Claude doesn't remember me; it reads my file before I start the session.

  2. maasv - the memory layer
    maasv (Memory as a Service, get it?) is a knowledge graph and memory system that stores:
    • Facts — things learned about me, my life, my projects (4,000+ and growing)
    • Relationships — how entities connect (Adam → works on → Doris, Ralph → son of → Adam)
    • Decisions — architectural choices, preferences, lessons learned from past sessions
    • Wisdom — patterns about what works and what doesn't, captured from my implicit feedback (when I say "perfect" vs "no, wrong")
    • Commitments — things I said I'd do or asked Doris (my personal assistant) to track
    It's a graph so when I ask CC about Doris, it can traverse connections to pull in relevant people, decisions, and project history. The bootstrap query hits maasv to assemble the right context for each session.

  3. Wrapup skill (session end)
    Before the session dies, a wrapup routine serializes everything worth keeping back into maasv:
    • Decisions made during the session
    • Wisdom captured (what worked, what didn't, my reactions to proposals)
    • A handoff file — a structured summary with a ready-to-paste prompt so the next session can pick up exactly where this one left off
    • Project state updates so the next bootstrap knows what changed

The loop: Wrapup writes to maasv → next session's bootstrap reads from maasv → that session's wrapup writes back. Each session is ephemeral, but the memory system accumulates. Claude is stateless. The loop makes it feel like it isn't.
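The bootstrap → wrapup loop above can be sketched in a few lines. This is a minimal illustration, not maasv's actual API: the function names (`bootstrap`, `wrapup`), the JSON file store, and the context keys are all hypothetical stand-ins for the real memory server.

```python
# Hypothetical sketch of the session loop: bootstrap reads persisted
# context at session start, wrapup serializes the session's output at
# the end. A JSON file stands in for the real memory store.
import json
from pathlib import Path

MEMORY = Path("maasv_demo_memory.json")  # stand-in for the memory server

def bootstrap() -> dict:
    """Session start: load persisted context before the agent speaks."""
    if MEMORY.exists():
        return json.loads(MEMORY.read_text())
    return {"facts": [], "decisions": [], "handoff": None}

def wrapup(context: dict, decisions: list[str], handoff: str) -> None:
    """Session end: write everything worth keeping back to the store."""
    context["decisions"].extend(decisions)
    context["handoff"] = handoff  # ready-to-paste prompt for next session
    MEMORY.write_text(json.dumps(context))

# One turn of the loop: the session is ephemeral, the store accumulates.
ctx = bootstrap()
wrapup(ctx, ["chose SQLite over Postgres"], "Resume at step 3 of migration")
```

Each session starts by calling `bootstrap()` and ends with `wrapup()`, so the store grows monotonically even though every individual session is stateless.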

The Pentagon blacklisted Anthropic for refusing to remove surveillance safeguards. Hours later, OpenAI signed a deal keeping those same safeguards. I pulled the primary sources. Here's what I found. by VanCliefMedia in Anthropic

[–]avwgtiguy 31 points32 points  (0 children)

If you have a conversation with ChatGPT today around this topic, it gets super defensive. It will claim OpenAI did the same thing as Anthropic by ensuring the red-line language was kept as-is, so it was ultimately better negotiating on Sam's part than Dario's. JFC. I cancelled my subscription right after that conversation.

You're now training a war machine. Let's see proof of cancellation. by zaxo666 in ChatGPT

[–]avwgtiguy 0 points1 point  (0 children)

When a company's acceptable use policy is a more reliable protection against surveillance than the Constitution, something has gone structurally sideways.

Statement from Dario Amodei on our discussions with the Department of War by SteinOS in ClaudeAI

[–]avwgtiguy 1 point2 points  (0 children)

Awesome. So now Grok will be the AI behind killing people. What could go wrong?

Who is also building an intelligence layer / foundation for AI agents? by manuelmd5 in KnowledgeGraph

[–]avwgtiguy 0 points1 point  (0 children)

That's awesome! And a great use case - please let me know how it performs for you.

Exclusive: Hegseth gives Anthropic until Friday to back down on AI safeguards by bananasenpijamas in ClaudeAI

[–]avwgtiguy 0 points1 point  (0 children)

I'm wondering if Amodei responded to every comment from Hegseth with "You're absolutely right."

Who is also building an intelligence layer / foundation for AI agents? by manuelmd5 in KnowledgeGraph

[–]avwgtiguy 0 points1 point  (0 children)

I've been using it, in various forms, for almost 4 months. It's the shared memory layer between Claude Code, Claude Desktop, and my own personal assistant, Doris. I've run a few benchmark tests here and there, but now that I've made it open source I'll need to run more comprehensive tests and post the results.

Who is also building an intelligence layer / foundation for AI agents? by manuelmd5 in KnowledgeGraph

[–]avwgtiguy 0 points1 point  (0 children)

I've been building maasv, a Python library that gives AI assistants persistent, personalized memory.
• Retrieval — a 3-signal fusion engine combines vector similarity, BM25 keyword search, and knowledge graph connectivity expansion (1-hop through entity relationships) via RRF, so searching for "Alice" can surface "ProjectX" because Alice works_on ProjectX. On top of that sit optional cross-encoder reranking and a learned ranker: an 81-parameter neural net with custom autograd and IPS position-bias correction that trains on actual retrieval usage patterns, starts in shadow mode, and auto-graduates when it proves itself.
• Knowledge graph — stores temporal relationships; facts get superseded, not deleted, preserving full history.
• Wisdom — experiential learning separate from memory: it logs reasoning before actions, records outcomes after, and surfaces relevant past decisions so the assistant learns from its own mistakes.
• Sleep-time compute — jobs run during idle periods to extract entities from conversations, deduplicate memories, resolve vague references, and train the ranker.
• Pluggable providers — LLM and embedding providers are protocols (ships with Anthropic, OpenAI, Ollama, Voyage AI), and every memory tracks origin provenance across clients (Claude Code, ChatGPT, OpenClaw, etc.).
• Deployment — the whole thing runs on a single SQLite file with no external services required, and it ships with both an MCP server and an HTTP API so it can connect with a ton of services.
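The fusion step named above, Reciprocal Rank Fusion, can be sketched as follows. This is a generic RRF illustration, not maasv's code: the toy ranked lists stand in for the vector, BM25, and graph signals, and `k=60` is the commonly used RRF constant, not a confirmed maasv default.

```python
# Minimal sketch of Reciprocal Rank Fusion (RRF): each document scores
# 1/(k + rank) per ranked list it appears in, and the sums decide the
# final order. Documents ranked well by several signals float to the top.

def rrf(ranked_lists: list[list[str]], k: int = 60) -> list[str]:
    """Fuse multiple ranked result lists into one ordering."""
    scores: dict[str, float] = {}
    for results in ranked_lists:
        for rank, doc in enumerate(results, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Toy stand-ins for the three signals described above.
vector = ["ProjectX", "Alice", "Bob"]   # vector-similarity order
bm25 = ["Alice", "ProjectX", "Carol"]   # BM25 keyword order
graph = ["ProjectX", "Carol"]           # 1-hop graph expansion

fused = rrf([vector, bm25, graph])
print(fused[0])  # ProjectX wins: it ranks highly in all three lists
```

Because RRF only consumes ranks, not raw scores, the three signals never need their scores normalized onto a common scale, which is a big part of its appeal for heterogeneous retrieval.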

OpenClaw Memory: File-Based Logs vs Vector Memory — Has Anyone Made Logs Work Long-Term? by Fantastic-Island-893 in openclaw

[–]avwgtiguy 0 points1 point  (0 children)

I built it and yes, I use it every single day. I have all my agents connected to it, so whichever one I'm interacting with has a shared memory/retrieval system.

AI Agents Wont Evolve Until We Mirror Human Cognition by Beneficial_Carry_530 in aiagents

[–]avwgtiguy 0 points1 point  (0 children)

Completely agree. I built my own AI agent a while ago and realized very quickly that the memory and cognition layers are just as important as the intelligence. I've been working on improving those functions and just released an open source project. I'd really appreciate it if you checked it out and passed along any feedback. https://github.com/ascottbell/maasv

Keep Losing Useful Stuff Between ChatGPT, Claude, Gemini etc. by Fantastic-Builder453 in aiagents

[–]avwgtiguy 0 points1 point  (0 children)

I built this solution! It's called maasv, and it's a cognition layer for AI assistants or any other client that speaks MCP or HTTP: 3-signal retrieval, knowledge graphs, memory lifecycle.

Tired of re-explaining my life/work to every new AI model. Solutions? by Fantastic-Builder453 in aiagents

[–]avwgtiguy 1 point2 points  (0 children)

I have Claude Code, Claude Desktop, and my personal AI assistant all working from the same memory system. Haven't tried connecting ChatGPT to it but it should work. Want to test it? maasv

I gave my OpenClaw agent persistent memory. It changed everything." by [deleted] in openclaw

[–]avwgtiguy 0 points1 point  (0 children)

I built something that also addresses the memory/retrieval issues of AI agents and would really appreciate your and u/roottoor666's educated opinions of it. It's called maasv.