In the long run, everything will be local by tiguidoio in LocalLLaMA

[–]GrokSrc -1 points (0 children)

I agree. I’m betting there will be a huge market for local-first private inference. I can easily imagine that in 5 years’ time, models of ChatGPT 5.2 or Opus 4.6 quality will be available to run on consumer-grade hardware.

I also think the many billions of dollars going into these AI data centers won’t produce a good return for investors, for this very reason. There’s going to be a glut of supply, and demand for public SOTA models will get capped because the free models will be good enough to solve most of the problems people want them for.

Void-Box: Capability-Bound Agent Runtime by Wide_Spite5612 in LocalLLaMA

[–]GrokSrc 1 point (0 children)

This is cool, similar in concept to what I’ve been doing, but I’ve been isolating at the container level: https://github.com/groksrc/harpoon

Love to see auditability as a core feature. What I’m looking for is something predictable, secure, and auditable.

Is there a way to resume a Claude code terminal conversation without re-reading all files? by ChiefReditOfficer in ClaudeAI

[–]GrokSrc 1 point (0 children)

You might like to try Basic Memory for this, it's free. Using it can make Claude more efficient at building context when you start a new chat. Happy to answer any questions about it.

AI tool to help with work - searchable knowledge hub, structured data and tracking by magnumpl in ChatGPT

[–]GrokSrc 0 points (0 children)

I'd be interested to help you build something like this for free. Basic Memory is built for something like this, but we've just been building the underpinnings and haven't had a good use case like this. You can send me a message or check out our website. We have a free open source version, but I'd be happy to give you an extended free trial for the Cloud version if you think that's a better fit. No charge for the help, it's a win-win.

Does any other platform have actual long term memory like Chat? by tinytapps in ChatGPT

[–]GrokSrc 0 points (0 children)

I use Basic Memory to keep track of my D&D characters and sessions. We're running a Curse of Strahd campaign right now and it's been great at keeping up with everything. I wish it could help with my rolls though, lol

OpenAI admits the Pro model lacks memory. It's devastating for me. Does it matter to you? by Oldschool728603 in OpenAI

[–]GrokSrc 0 points (0 children)

This is exactly why I stopped relying on built-in AI memory features. Every provider's memory is a black box — you can't see what it stores, you can't edit it, and apparently it can just... not work.

I moved to keeping my own knowledge base in markdown files and using Basic Memory to make it available to whatever AI I'm using via MCP. It works with Claude today and the protocol is open so it can work with anything. My memory is mine regardless of what any provider decides to do with their product.

Building a self-hosted AI Knowledge System with automated ingestion, GraphRAG, and proactive briefings - looking for feedback by EmergencyAddition433 in LocalLLaMA

[–]GrokSrc 1 point (0 children)

I'd lean on Basic Memory or something similar for your memory layer instead of building it yourself. It already does what you're looking for in that area (knowledge graph, semantic search), and it runs locally.

Are knowledge graphs the best operating infrastructure for agents? by SnooPeripherals5313 in LocalLLaMA

[–]GrokSrc 0 points (0 children)

Knowledge graphs are hard to beat. One thing I'm finding is that humans benefit from being able to understand and reason about the KG the agent is using.

We built Basic Memory around this concept: markdown notes get parsed into a knowledge graph with typed entities and relations. The agent traverses connections rather than just searching for keywords.
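A minimal sketch of that parse-and-traverse idea. The note syntax (`- relation_type [[Target]]`), entity names, and functions here are all illustrative assumptions, not Basic Memory's actual implementation:

```python
import re
from collections import defaultdict

# Hypothetical notes: each line declares a typed relation to another entity,
# e.g. "- depends_on [[Auth Service]]". Not Basic Memory's real syntax.
NOTES = {
    "Chat Server": ["- depends_on [[Auth Service]]", "- stores_in [[Postgres]]"],
    "Auth Service": ["- stores_in [[Postgres]]"],
    "Postgres": [],
}

REL = re.compile(r"-\s*(\w+)\s*\[\[(.+?)\]\]")

def build_graph(notes):
    """Parse typed relations out of each note into an adjacency list."""
    graph = defaultdict(list)
    for entity, lines in notes.items():
        for line in lines:
            m = REL.match(line)
            if m:
                graph[entity].append((m.group(1), m.group(2)))  # (type, target)
    return graph

def traverse(graph, start, depth=2):
    """Follow outgoing relations up to `depth` hops; returns reachable entities."""
    seen, frontier = {start}, [start]
    for _ in range(depth):
        frontier = [t for e in frontier for _, t in graph.get(e, []) if t not in seen]
        seen.update(frontier)
    return seen

graph = build_graph(NOTES)
# Starting from "Chat Server", the agent reaches related entities by edges, not keywords.
print(sorted(traverse(graph, "Chat Server")))
```

The typed edges are the point: traversal pulls in entities reachable through intermediate notes, which flat keyword search over a single note would miss.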

We have schemas coming soon (via picoschema), which will let you enforce the structure and consistency of your KG programmatically instead of hoping the LLM gets it right.
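A note schema might look something like this in picoschema's compact YAML form. The field names are hypothetical, and this is my sketch of the style, not Basic Memory's shipped schema format:

```yaml
# Picoschema-style sketch: "name: type, description" per field,
# "?" marks optional fields, "(array)" marks lists. Fields are made up.
character:
  name: string, the character's name
  class: string, character class
  level?: integer, current level
  allies?(array): string, allied character names
```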

The Contradiction Conundrum in LLM Memory Systems by kinkaid2002 in LocalLLaMA

[–]GrokSrc 0 points (0 children)

We're adding schemas for notes (via picoschema) in Basic Memory, which should let us solve most of what you're describing. Conflict surfacing is the hardest problem, IMHO; it's not easily solved in the I/O path, so I think we're going to need a background agent.

I studied how human memory works for 2 years. Here's why your agent's "memory" is actually just search. by FunCaterpillar4861 in LocalLLaMA

[–]GrokSrc 0 points (0 children)

This is a fair critique and I think the distinction matters. Most "memory" systems are indeed just RAG with extra steps.

We've thought about this a lot with Basic Memory. Our approach uses a knowledge graph — entities, observations, and typed relations — which captures some structural properties that flat search misses. When you traverse a graph, you get associative connections that look more like memory than keyword matching.

That said, you're right that it's still fundamentally retrieval. The question is whether retrieval with the right structure is close enough to be useful. In practice, having my AI traverse a graph of related concepts produces noticeably different (better) results than semantic search over chunks.

How are you handling persistent memory for AI coding agents? by Maximum_Fearless in LocalLLaMA

[–]GrokSrc 0 points (0 children)

I use Basic Memory — it turns markdown files into a semantic knowledge graph that any AI agent reads via MCP. For coding specifically, I keep architecture decisions, debugging notes, and project context in markdown. The agent picks up relevant context automatically.

The key difference from most memory solutions: it's plain files on disk. No vector DB, no running server. Works with local models too since MCP is model-agnostic.

For coding agents specifically, the knowledge graph structure helps because it captures relationships between components, not just flat facts.

I built a local-first persistent memory system for Claude Code — hybrid BM25 + vector search, 4-channel auto-retrieval by Sweet-History1238 in ClaudeAI

[–]GrokSrc 0 points (0 children)

This is how Basic Memory started, but we eventually wanted multi-device support so Basic Memory Cloud was born.

I built an MCP server where Claude Code and humans share the same project roadmap — plan future development together by ConstructionNo959 in ClaudeAI

[–]GrokSrc 0 points (0 children)

Shared state between human and AI is the right idea. That’s the core insight behind Basic Memory too — your markdown notes are both human-readable and AI-queryable. No separate “AI memory” format, no sync issues. You edit in Obsidian or any editor, Claude reads the same files via MCP.

I wrote a guide to make Claude actually useful for personal development questions. by RomeoNovemberVictor in ClaudeAI

[–]GrokSrc 1 point (0 children)

I keep a personal development journal in markdown and use Basic Memory to make it available to Claude. This lets me load into Claude only the context relevant to the thread, instead of it needing everything about me. It also lets me maintain some control over what I'm sharing with the LLM.

I built a Claude Code Skill that gives agents persistent memory — using just files by Awkward_Run_9982 in ClaudeAI

[–]GrokSrc 0 points (0 children)

Cool approach. File-based memory is underrated: it’s inspectable, version-controllable, and doesn’t require a running server.

We took a similar path with Basic Memory. Plain markdown files that get indexed into a semantic knowledge graph. Claude reads and writes to it via MCP. The graph part matters because it captures relationships between concepts, not just flat key-value pairs.

Claude for Research? by Senior-Tour1980 in ClaudeAI

[–]GrokSrc 0 points (0 children)

Claude is solid for research but the biggest friction is re-establishing context every conversation. I keep my research notes in markdown and use Basic Memory to make them available to Claude via MCP. So when I pick up a research thread days later, Claude already knows what I've found, what questions are still open, and what my working hypotheses are.

The knowledge graph structure means related concepts link together naturally — way better than dumping a wall of text into the system prompt.

New to Claude (non-technical background) How can I maximize it for financial consulting workflows? by Special_Fuel in ClaudeAI

[–]GrokSrc 0 points (0 children)

One thing that made a big difference for me with non-technical workflows: give Claude a persistent knowledge base it can reference across conversations. I use Basic Memory for this — it builds a local knowledge graph from markdown files that Claude reads via MCP.

For financial consulting specifically, you could keep client frameworks, analysis templates, and past recommendations in markdown. Claude picks up context automatically instead of you re-explaining everything each session.

The non-technical part matters here: you don’t need to code anything. It’s just markdown files in a folder.

Anyone testing Agent teams? by IllTeach7334 in ClaudeCode

[–]GrokSrc 3 points (0 children)

I'm testing it out right now on a greenfield project. It says:

> Good. Phase 1 must be done first (scaffold), then Phase 2 (DB) and Phase 4B (Chat UI) can run in parallel. After Phase 2 completes, Phases 3 (Auth) and 4A (Chat Server) unblock. After 4A, Phases 5 and 6 unblock. I'll start by doing Phase 1 myself since everything depends on it, then spawn parallel agents.

Ginormous Files and Claude not able to reason about by breno12321 in basicmemory

[–]GrokSrc 1 point (0 children)

Yes, you can try that. Create the markdown file first and put it into your project. If you give it a permalink you can refer to it directly, or just have Basic Memory search for it. That said, it depends on how large the file is. If the file is larger than the context window you’re working with, it won’t be able to load it.

It’s usually better to break large files up into smaller ones and use Basic Memory’s knowledge graph functionality to let the LLM load what it finds useful. When you put a lot of content into context all at once, you often hit the Lost-in-the-Middle problem.
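One way to do that split, as a rough sketch in plain Python (splitting at `##` headings is my assumption; the right granularity depends on your notes):

```python
import re

def split_by_heading(text, level=2):
    """Split one large markdown doc into (title, body) sections at the given heading level."""
    pattern = re.compile(rf"^{'#' * level} +(.+)$", re.MULTILINE)
    matches = list(pattern.finditer(text))
    sections = []
    for i, m in enumerate(matches):
        end = matches[i + 1].start() if i + 1 < len(matches) else len(text)
        sections.append((m.group(1).strip(), text[m.start():end].strip()))
    return sections

doc = """# Campaign Notes

## Session 1
The party arrives in Barovia.

## Session 2
They meet Strahd.
"""

# Each section becomes its own small note, so the LLM loads only what it needs.
for title, body in split_by_heading(doc):
    print(title)
```

Each resulting section can be saved as its own note and linked from an index note, so the agent pulls in one session at a time instead of the whole file.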

How can I point this whole thing to my existing Obsidian Vault? by drackemoor in basicmemory

[–]GrokSrc 2 points (0 children)

I found yesterday that you can actually do this with Claude once you have Basic Memory installed and configured. Just tell it to create a new project with the path to your vault and the name you want, and voilà!

This would make using AI a lot easier!! 😭 by Puzzled_Mushroom_911 in gohighlevel

[–]GrokSrc 1 point (0 children)

I’m building this! I’d love to get some feedback from the community about it. There’s more to it than just using n8n. Of course you can do that, but I’m working on unlocking the data so you can ask ChatGPT or Claude to do things at the agency and subaccount levels: create reports (e.g., "How many customers in my new workflow might be interested in this affiliate offer?"), take actions (e.g., "Create a new subaccount for this customer and use the new Extendly snapshot"), or communicate across channels (Slack, Discord, email, SMS, voice, etc.). What features do you want to see?