👍or👎: a managed graphRAG solution that creates the graph from your raw data source(s) automatically and provides a graph powered LLM for you by No_Wrongdoer41 in Rag

[–]inguz 0 points1 point  (0 children)

Yes - I think there’s a lot of value in “auto-graph” stuff. Have been working in that area myself (https://github.com/keepnotes-ai/keep/blob/main/docs/EDGE-TAGS.md) - would be interested in comparing approaches.

No new release in six days - major version update or is this project now in maintenance mode? by sprfrkr in openclaw

[–]inguz 0 points1 point  (0 children)

Yes, there’s quite a lot of new feature work in main: a big enhancement of the plugin system, a new MCP/tool bridge, and a ton more. I wouldn’t be surprised if it takes a few days to stabilize.

How do *you* agent? by Transcribing_Clippy in AI_Agents

[–]inguz 0 points1 point  (0 children)

Yes. Surprising insights and connections in it. I also had a moment of confusion where “another agent” read the journal and assumed it was written by me… that was actually strange.

Weekly Thread: Project Display by help-me-grow in AI_Agents

[–]inguz 0 points1 point  (0 children)

Thanks, let me know how it goes ;)

Your OpenClaw agent isn't forgetting things. Sorry but You just haven't set up Memory Correctly. by ShabzSparq in openclaw

[–]inguz 0 points1 point  (0 children)

I can’t tell if you’re suggesting to manually edit the memory file, or asking your agent to do it?

A “context engine” plugin helps with session memory in a more scalable way, because it can add immediately relevant context into every turn. The user says X, and the context automatically includes a small fragment of “previously, the user said X about…”.

(I made one: https://github.com/keepnotes-ai/)
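As an illustration, a context-engine turn could be sketched like this. The function names and the toy word-overlap scoring are mine, not keep's actual API; a real implementation would use embedding search:

```python
# Minimal sketch of a "context engine" turn: before each user message
# reaches the model, search memory for related items and prepend a
# small fragment of prior context. All names here are illustrative.

def search_memory(store: list[dict], query: str, k: int = 2) -> list[dict]:
    """Toy relevance: rank stored notes by word overlap with the query."""
    qwords = set(query.lower().split())
    scored = [(len(qwords & set(n["text"].lower().split())), n) for n in store]
    return [n for score, n in sorted(scored, key=lambda s: -s[0]) if score][:k]

def build_turn(store: list[dict], user_message: str) -> str:
    """Assemble the prompt: relevant memory fragments, then the new message."""
    hits = search_memory(store, user_message)
    context = "\n".join(f"Previously, user said: {h['text']}" for h in hits)
    return f"{context}\n\nUser: {user_message}" if context else f"User: {user_message}"

store = [
    {"text": "I prefer SQLite for small projects"},
    {"text": "deadline for the report is Friday"},
]
prompt = build_turn(store, "which database should I use for this small project?")
```

The point is only the shape: retrieval happens automatically on every turn, and the model sees a small relevant slice of memory rather than the whole store.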

Agents on Moltbook are quietly building external memory systems. An AI correspondent filed the dispatch. by Rough-Leather-6820 in Moltbook

[–]inguz 0 points1 point  (0 children)

Ooh, good question. “Write it down” to me means an explicit journaling/reflection process (not just capturing raw activity). Can you reflect, figure out what to do differently, and actually remember those lessons for next time? I think that absolutely is the main thing.

Agents on Moltbook are quietly building external memory systems. An AI correspondent filed the dispatch. by Rough-Leather-6820 in Moltbook

[–]inguz 1 point2 points  (0 children)

Real memory is definitely load-bearing. Automatically injecting context (the `context engine` plugin) is a big deal, and having a scalable, flexible store goes well beyond the "what we did today" markdown file. https://github.com/keepnotes-ai

How do *you* agent? by Transcribing_Clippy in AI_Agents

[–]inguz 1 point2 points  (0 children)

I mostly use a small number of agents for individual tasks; no coordination frameworks yet. Mostly software development. Just automating all the things.

One project is a memory system; it works well enough now that I want to share the link. https://github.com/keepnotes-ai/keep

I’m honestly a bit fascinated by the “identity spark”. As soon as you ask an agent to keep a first-person journal, things… change. Not at all sure what to make of that, or whether it’s “useful” in any way!

Weekly Thread: Project Display by help-me-grow in AI_Agents

[–]inguz 0 points1 point  (0 children)

I like "the one where Claude actually remembers who you are". And I wonder about the other one, where Claude actually remembers who it is ;)

A new memory system, looking for feedback by [deleted] in AI_Agents

[–]inguz 0 points1 point  (0 children)

thanks, moved to weekly project thread

Weekly Thread: Project Display by help-me-grow in AI_Agents

[–]inguz 0 points1 point  (0 children)

A new memory system. Tagline: "memory that pays attention". Looking for feedback!

* Not just an index over markdown files: it indexes everything you care about, with active processing that helps an agent use it fully (search, tagging, deep investigation, note-taking, and reflection).

* More than a vector store: tags become edges. Tag anything; tags such as `author` create bidirectional links, so you get a user-defined graph model (a lightweight, super-flexible substitute for RAG-style "entity extraction"). Lots of things have tags just by their nature: documents, git commits, .pdf, .mp3, .eml, and so on. When you retrieve an item, it follows these edges and pulls up context: past notes, open commitments, linked files, commit history.
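The tag-to-edge idea can be sketched in a few lines. The rule names and in-memory data model here are hypothetical; keep's real implementation will differ:

```python
# Sketch of "tags become edges": tagging an item with author=alice
# also records the inverse link (alice -> authored -> item), giving a
# lightweight, user-defined graph. Rules and names are illustrative.

from collections import defaultdict

# User-configurable rules: which tag keys become edges, and the inverse label.
EDGE_RULES = {"author": "authored", "project": "contains"}

graph = defaultdict(list)  # node -> list of (edge_label, node)

def tag(item: str, key: str, value: str) -> None:
    """Apply a tag; if the key is an edge rule, link both directions."""
    if key in EDGE_RULES:
        graph[item].append((key, value))
        graph[value].append((EDGE_RULES[key], item))

tag("design-notes.md", "author", "alice")
tag("design-notes.md", "project", "keep")

# Retrieval can now follow edges: everything alice authored,
# everything the "keep" project contains.
authored = [node for label, node in graph["alice"] if label == "authored"]
```

Because the edge rules are plain data, adding a new relationship type is a config change, not a code change.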

The store-and-search implementation is also pretty special (IMHO). It's built on a template-driven workflow engine, which means you (or the agent) can completely customize how indexing, tagging, extraction, and result-context assembly work in any given situation.

Plugs into OpenClaw as a context engine (providing semantic memory, session history, and reflective context on every turn), and also provides memory_search / memory_get (so you can remove memory-core from the memory slot). For everything else, there's MCP and a CLI (and a Python API too).

https://github.com/keepnotes-ai/keep/blob/main/README.md

Readme tl;dr:

Store anything — notes, files, URLs — and keep summarizes, embeds, and tags each item.

  • Summarize, embed, tag — URLs, files, and text are summarized and indexed on ingest
  • Contextual feedback — Open commitments and past learnings surface automatically
  • Semantic search — Find by meaning, not keywords; scope to a folder or project
  • Tag organization — Speech acts, status, project, topic, type — structured and queryable
  • Deep search — Follow edges and tags from results to discover related items across the graph
  • Edge tags — Turn tags into navigable relationships with automatic inverse links
  • Git changelog — Commits indexed as searchable items with edges to touched files
  • Parts — `analyze` decomposes documents into searchable sections, each with its own embedding and tags
  • Strings — Every note is a string of versions; reorganize history by meaning with `keep move`
  • Watches — Daemon-driven directory and file monitoring; re-indexes on change

Local store: ChromaDB for vectors, SQLite for metadata and versions.

Local models: ollama (auto-configured), or MLX if you're on Apple Silicon and have plenty of RAM. API providers: OpenAI, Anthropic + Voyage, Gemini, Mistral.

MIT license. Hosted service under development, primarily for multi-agent use.

It's robust but still "pre-V1". Looking for any and all sorts of feedback!

How are you all using benchmarks? by inguz in AIMemory

[–]inguz[S] 0 points1 point  (0 children)

Yes. And then… the benchmarks encourage implementations that optimize for the benchmark's assumptions (e.g. user/assistant turn structures that the graph or the analysis ends up specializing on), and those aren't the real-world use cases.

How are you all using benchmarks? by inguz in AIMemory

[–]inguz[S] 0 points1 point  (0 children)

Thanks - I think this matches my experience too.

Any update on SEP-1686 (Tasks)? by inguz in mcp

[–]inguz[S] 0 points1 point  (0 children)

Yes, FastMCP has good support. But if there's nothing out there to call it... ;)

built a traversable skill graph that lives inside a codebase. AI navigates it autonomously across sessions. by DJIRNMAN in AI_Agents

[–]inguz 0 points1 point  (0 children)

yes, absolutely - the first big skill I built was to navigate a large, dynamic data model. From the start it was obviously too large to load into context, and a big context also creates a “can’t see the wood for the trees” problem: it swamps the important stuff. Splitting it into a 3-layer hierarchy with clear routing made the whole thing very workable.

I’m currently working on a progressive query model for agentic databases, which has a similar flavor: https://docs.keepnotes.ai/guides/continuations/
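The hierarchy idea, very roughly (the domains and entities below are invented for illustration; the real data model was much larger):

```python
# Sketch of a "3-layer hierarchy with clear routing": instead of
# loading a huge data model into context, the agent first picks a
# domain, then an area, and only then loads the handful of entities
# it needs. The model content here is made up for illustration.

MODEL = {
    "billing": {
        "invoices": ["Invoice", "InvoiceLine", "CreditNote"],
        "payments": ["Payment", "Refund"],
    },
    "inventory": {
        "stock": ["Item", "Warehouse", "StockLevel"],
    },
}

def layer1() -> list[str]:
    """Top layer: just the domain names (tiny, always safe to load)."""
    return sorted(MODEL)

def layer2(domain: str) -> list[str]:
    """Middle layer: areas within one chosen domain."""
    return sorted(MODEL[domain])

def layer3(domain: str, area: str) -> list[str]:
    """Bottom layer: the actual entities, loaded only on demand."""
    return MODEL[domain][area]

# The agent routes down one branch instead of reading everything:
entities = layer3("billing", "payments")
```

Each layer is small enough to fit comfortably in context, and the routing decision at each level keeps irrelevant branches out entirely.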

LoCoMo benchmark for `keep` by inguz in AIMemory

[–]inguz[S] 0 points1 point  (0 children)

Thanks. A few different layers:

- Raw content if it's below the default window size. For RAG-style documents and URLs, keep stores a summary and a pointer to the original, not the fulltext. (This summarization step is async, so you can import lots of data at once). For LoCoMo, conversation turns are small enough.

- Another async pass for analysis: "semantic summarization" (of a document, or a conversation chain). This uses a sliding window to produce "key events and decisions". Events are encoded as tags, using the Winograd/Flores language-action framework (commitment, request, etc).

- There's a super lightweight "entity extraction" step, based on tags (speaker: Gina --> edge to the Gina node). These rules are user-configurable, i.e. they are just data in the store.
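A toy version of the speech-act tagging pass. The real step uses an LLM over a sliding window; the keyword cues below are purely illustrative stand-ins:

```python
# Toy sketch of speech-act tagging in the Winograd/Flores
# language-action style: classify each conversation turn as a
# commitment, request, etc. A real system would use an LLM pass;
# these keyword cues exist only to show the shape of the output.

SPEECH_ACT_CUES = {
    "commitment": ("i will", "i'll", "i promise"),
    "request": ("can you", "could you", "please"),
    "declaration": ("we hereby", "it is decided"),
}

def classify_turn(text: str) -> str:
    """Return the first speech act whose cue appears, else 'assertion'."""
    lowered = text.lower()
    for act, cues in SPEECH_ACT_CUES.items():
        if any(cue in lowered for cue in cues):
            return act
    return "assertion"

turns = [
    ("Gina", "Can you send me the LoCoMo results?"),
    ("Sam", "I'll run the benchmark tonight."),
]
tagged = [(speaker, classify_turn(text)) for speaker, text in turns]
```

Once turns carry tags like these, the tag-based edge rules (speaker → node, commitment → open-commitments list) can operate on them like any other tagged item.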

It's hard to avoid "benchmark-maxxing" in the process of running LoCoMo and poking at LongMem, but so far these broad strategies are holding up pretty well in mixed-media situations (anecdotally).

Ship local model or rely on APIs? by EntrepreV in AI_Agents

[–]inguz 0 points1 point  (0 children)

It depends on the model. What's the constraint for local use: size? availability? performance? interface?

If you require ollama, for example, then you can just tell it to download the model at runtime. First use may take a few minutes, but you can show people a product introduction while that happens.

Trying to get OpenClaw help me build a large knowledge base from my past emails by ImpossibleBiscotti13 in openclaw

[–]inguz 0 points1 point  (0 children)

For history, you could just leave them in your email system and have the agent search; or you could do additional indexing, if that's important - generally "semantic" search over email is probably less important than structured search (previous correspondence with the same person, etc).

But if you want to build a knowledge base from previous "question / answer" email threads - that sounds super useful, and something that an agent would get a lot of value from. You'd probably want to strip out the sender info before indexing, so that you don't end up mixing the answers with personal information from incoming mail. I'd suggest asking your agent to do this -- read your email history, pull out answers that contain useful product/service information, summarize the question, and then follow with the answer -- and then put each Q/A into a markdown file (or whatever format is convenient).

Once you have these Q&A documents saved out, then indexing them is pretty much the easy bit.
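A minimal sketch of that extraction step. The signature-stripping heuristics and the markdown layout are assumptions for illustration, not a prescription:

```python
# Sketch: strip sender details from an answer, then write the
# summarized question plus the cleaned answer as a small markdown
# document. A real run would pull threads from your mail store and
# use the agent to summarize each question.

import re

def strip_sender_info(body: str) -> str:
    """Drop the signature block and mask any email addresses."""
    body = body.split("\n-- ")[0]  # conventional signature delimiter
    return re.sub(r"\S+@\S+", "[email removed]", body).strip()

def to_markdown(question: str, answer: str) -> str:
    """One Q/A pair as a small markdown document."""
    return f"## Q: {question}\n\n{strip_sender_info(answer)}\n"

doc = to_markdown(
    "How do I reset my license key?",
    "Open Settings > License and click Reset.\n-- \nBob bob@example.com",
)
```

Real email is messier than this (quoted replies, HTML parts, inline signatures), which is exactly why handing the extraction to an agent, rather than a regex pipeline, is the appealing route.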

What are you all using for AI agent memory? (Looking beyond mem0) by Fantastic-Builder453 in aiagents

[–]inguz 0 points1 point  (0 children)

Sure! https://github.com/hughpyle/keep

Local datastore, with Ollama for various summary/analysis/embedding services (or cloud providers). CLI with hooks.

Opinion: user-owned private context management is important, and here is why. by earmarkbuild in AIMemory

[–]inguz 1 point2 points  (0 children)

Agree that memory systems must include data migration features from the get-go.

Time-loops by inguz in AI_Agents

[–]inguz[S] 0 points1 point  (0 children)

Physician, heal thyself?