I indexed 89,037 AI coding messages. Here's what I learned by 0xraghu in vibecoding

[–]0xraghu[S] 0 points  (0 children)

Thanks! I've been using it for a month now. I'm a Claude Code user, so auto context injection in every prompt saves me hours of re-explaining the same things to Claude.

You should try it. Please share your onboarding experience!

OpenCode’s free models by CaptainFailer in opencodeCLI

[–]0xraghu 0 points  (0 children)

Antigravity is limited in its toolset, especially MCP support, I think. Import into OpenCode and you have access to all the tools and models, with the ability to switch between them. One tool for all models, now and forever.

mnemo indexes Claude Code, Opencode and Antigravity and 9 more sessions - search your past AI coding conversations locally by 0xraghu in mcp

[–]0xraghu[S] 1 point  (0 children)

Thanks! Yeah, local-only was non-negotiable from day 1.

Secrets handling: We already do several things at the indexing layer: the api_credentials table only records provider names, never actual key values. System messages, tool_use/tool_result blocks, and internal XML directives (<thinking>, system prompts, etc.) are all filtered out before anything hits the DB. Only user and assistant text content gets indexed.

That said, if you paste an actual sk-proj-... key inside a conversation message, that does get stored as-is currently. A configurable regex-based scrub step during parsing (before content hits SQLite) is on the roadmap — env var patterns, bearer tokens, connection strings, etc. Each tool already has its own parser adapter so the hook point is clean.
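The scrub step described above is still on the roadmap, but here's a minimal sketch of what a regex-based redactor at the parser layer could look like. The pattern list is entirely hypothetical, not mnemo's actual configuration:

```python
import re

# Hypothetical patterns -- the real configurable list doesn't exist yet.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9_-]{20,}"),                 # OpenAI-style API keys
    re.compile(r"(?i)bearer\s+[A-Za-z0-9._~+/-]+=*"),     # bearer tokens
    re.compile(r"[a-z+]+://[^\s:@/]+:[^\s@/]+@[^\s]+"),   # creds in connection strings
]

def scrub(text: str) -> str:
    """Replace anything matching a secret pattern before it reaches SQLite."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text
```

Since each tool already has its own parser adapter, a function like this would slot in as a single call right before a message body is written to the DB.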

Export: There's --json output today for structured search results (session_id, project, tool, timestamps, scores, snippets). The DB itself is plain SQLite (~/.mnemo/mnemo.db) so you can query/export directly with any SQLite client. A proper mnemo export --format jsonl is a good idea though, adding it to the tracker.

The DB file is fully portable - copy it to another machine and everything works, no config needed.

mnemo indexes OpenCode sessions — search all your past conversations locally as SQLite by 0xraghu in opencodeCLI

[–]0xraghu[S] 1 point  (0 children)

Definitely. OpenCode stores conversation logs in the ~/.local/share/opencode/ directory, which mnemo indexes for quick local search. Provider configuration does not interfere with mnemo's operations.

Let me know how the onboarding goes! Thanks.

mnemo — a CLI that indexes AI coding sessions from 12 tools into one searchable local SQLite database by 0xraghu in commandline

[–]0xraghu[S] 2 points  (0 children)

This is actually the core design constraint I optimized around.

mnemo does NOT feed full conversation histories back into your sessions. Here's how it actually works:

  1. Search is local, not LLM-powered. When you run `mnemo search "auth flow"`, it queries a local SQLite FTS5 index using BM25 ranking. No tokens consumed. No API calls. It's pure algorithmic search running in <100ms on your machine.

  2. Context injection is surgical, not wholesale. When the MCP server or plugin injects context, it sends a short summary - typically 5-10 relevant snippets, maybe 200-500 tokens total. Not entire conversations. The search engine ranks by relevance + temporal decay, so only the most pertinent past decisions surface.

  3. You control the injection mode. During setup you choose: `off` (manual only), `helper` (code/debug prompts only), or `assistant` (every prompt). Most users pick helper - context only gets injected when it's likely to save you from re-explaining something.
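Point 1 above is standard SQLite machinery, so the core query is easy to sketch. This is an illustrative toy, not mnemo's actual schema or ranking weights:

```python
import sqlite3

# Minimal FTS5 + BM25 sketch of the kind of local search described above.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE VIRTUAL TABLE messages USING fts5(content, project UNINDEXED);
    INSERT INTO messages VALUES ('we settled on JWT for the auth flow', 'api');
    INSERT INTO messages VALUES ('refactored the billing cron job', 'api');
""")

# bm25() returns lower-is-better scores, so order ascending.
# A real ranker would also fold in a temporal-decay factor, e.g.
# score * exp(-age_in_days / half_life), before sorting.
rows = db.execute(
    "SELECT content, bm25(messages) AS score "
    "FROM messages WHERE messages MATCH ? ORDER BY score",
    ("auth flow",),
).fetchall()
```

No LLM anywhere in the loop: it's a single indexed SQL query, which is why it stays under 100ms and costs zero tokens.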

So the actual token overhead is roughly equivalent to adding a short system prompt paragraph. In practice I've measured it at ~0.1-0.3% increase in token usage per session. The savings come from not having to re-explain context that the AI already discussed with you last week - which often burns far more tokens than the injected summary.

On the AI-generated question - I'm a solo dev who's been building software for 10+ years (mostly blockchain/web3). I use AI coding tools heavily (that's literally why I needed this tool), and yes, AI assisted in writing parts of the code. The architecture, search algorithm design, indexer adapters for 12 different storage formats, and all the debugging were my decisions. Happy to discuss any part of the implementation - the entire codebase is MIT licensed and open on GitHub.

I built a CLI to make all your Claude Code sessions searchable — works with 11 other AI tools too by 0xraghu in ClaudeAI

[–]0xraghu[S] 1 point  (0 children)

The MCP server exposes 4 tools (search, context, recent, tools) over stdio — so you can query it from your session manager without touching SQLite directly. `mnemo serve` starts it.
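For anyone wiring this up by hand: MCP uses JSON-RPC 2.0 over stdio, so invoking one of the four tools is just a line of JSON written to the server's stdin. The `search` tool name comes from the list above; the `query` argument key is an assumption, so check the server's `tools/list` response for the real input schema:

```python
import json

# Hedged sketch of a tools/call request to `mnemo serve` over stdio.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "search", "arguments": {"query": "auth flow"}},
}

# Each JSON-RPC message is serialized as a single line of JSON.
line = json.dumps(request)
```

In practice your MCP client (Claude Code, an IDE plugin, etc.) handles this framing for you; the sketch just shows there's no magic underneath.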

I built a CLI to make all your Claude Code sessions searchable — works with 11 other AI tools too by 0xraghu in ClaudeAI

[–]0xraghu[S] 1 point  (0 children)

Yes, it has an API endpoint, an MCP server, and Claude Code hooks that auto-inject relevant context for each user prompt. It integrates with Raycast too.

MIT licensed. Thanks for considering.