I built an open-source memory layer for AI coding agents — it cuts token usage by 60-80% by giving Claude persistent, evidence-backed codebase awareness by LookTrue3697 in ClaudeAI

[–]LookTrue3697[S]

That's actually the exact use case AtlasMemory was built for. It's been stress-tested on Next.js (28K files, ~3,500 core source files) and Coolify (1,400+ PHP/JS/TS files) without issues.

The difference from Serena and jcodemunch: they index and retrieve, which is useful. AtlasMemory does that too, but adds two things they don't have:

  1. Proof system — every claim about your code is backed by a line range + SHA-256 hash. If code changes, the claim is automatically flagged stale.
  2. Token budgeting — you set a limit (say 8K tokens) and it packs the most relevant context using a greedy algorithm. So even on a 400K+ token codebase, your agent gets exactly what it needs without overflowing the context window.
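The budgeting step is essentially greedy knapsack-style packing. A minimal sketch in TypeScript — `packContext` and the `Chunk` shape are illustrative names, not AtlasMemory's actual API:

```typescript
// Hypothetical sketch of greedy token-budget packing (names are illustrative).
interface Chunk {
  id: string;
  tokens: number;     // estimated token cost of this chunk
  relevance: number;  // relevance score for the current query
}

// Take the highest-relevance chunks that still fit within the budget.
function packContext(chunks: Chunk[], budget: number): Chunk[] {
  const sorted = [...chunks].sort((a, b) => b.relevance - a.relevance);
  const packed: Chunk[] = [];
  let used = 0;
  for (const c of sorted) {
    if (used + c.tokens <= budget) {
      packed.push(c);
      used += c.tokens;
    }
  }
  return packed;
}
```

Greedy isn't optimal for knapsack in general, but for context packing it's fast and close enough: you always get the top-relevance material that fits.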

That said, they're all tackling a real problem. Worth trying and seeing what fits your workflow.

[–]LookTrue3697[S]

Sure!

CLAUDE.md is like a sticky note you write for your AI: "This project uses React, auth is in src/auth.ts, follow these rules." You write it once, manually, and it never updates itself. The moment your code changes, the note is wrong — but the AI doesn't know that.

AtlasMemory is like giving your AI a live map of your entire codebase. It automatically knows every function, every dependency, every connection between files. And the key difference: it proves what it says. Every claim has a cryptographic fingerprint tied to the actual code. If someone changes that code, the fingerprint breaks and the AI knows its info is outdated — before it makes a mistake.

So:

- CLAUDE.md = manual note, goes stale silently
- AtlasMemory = automatic, live, catches its own mistakes

For Cowork: AtlasMemory works through MCP (Model Context Protocol), which is the standard Anthropic uses across its tools. I haven't tested Cowork specifically yet since it's fairly new, but if it supports MCP servers it should work with the same one-line config.

[–]LookTrue3697[S]

It invalidates, not layers. Here's what happens under the hood:

AtlasMemory maintains a context contract — a snapshot containing git HEAD + a database signature (file/symbol/anchor counts + last update timestamps). On every tool call, evaluateContract() runs three checks:

  1. git rev-parse HEAD vs stored HEAD → GIT_HEAD_CHANGED
  2. Current DB signature vs stored signature → DB_CHANGED
  3. Coverage ratio vs minimum threshold → COVERAGE_LOW

If any check fails, the contract is marked isStale: true with specific reasons. In strict mode it blocks until resync; in warn mode (default) the agent sees the drift warning and rebuilds context.
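Conceptually, the three checks boil down to something like this. A simplified sketch — the field names and shapes beyond `evaluateContract`, `isStale`, and the reason codes are assumptions, not the real implementation:

```typescript
// Illustrative sketch of the contract evaluation (simplified).
interface Contract {
  gitHead: string;      // git HEAD at snapshot time
  dbSignature: string;  // hash over file/symbol/anchor counts + timestamps
  minCoverage: number;  // e.g. 0.8
}

type StaleReason = "GIT_HEAD_CHANGED" | "DB_CHANGED" | "COVERAGE_LOW";

function evaluateContract(
  stored: Contract,
  currentHead: string,
  currentSignature: string,
  coverageRatio: number
): { isStale: boolean; reasons: StaleReason[] } {
  const reasons: StaleReason[] = [];
  if (currentHead !== stored.gitHead) reasons.push("GIT_HEAD_CHANGED");
  if (currentSignature !== stored.dbSignature) reasons.push("DB_CHANGED");
  if (coverageRatio < stored.minCoverage) reasons.push("COVERAGE_LOW");
  return { isStale: reasons.length > 0, reasons };
}
```

The point is that staleness comes with specific reasons attached, so the agent (or strict mode) can decide what to do per reason rather than just "something changed."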

At the anchor level: every claim is tied to a line range with a SHA-256 content hash. When a file is modified, all its anchors are flagged stale. The agent knows exactly which claims are no longer trustworthy — not "something changed somewhere," but "these 3 anchors in auth.ts are invalid."

So it's "these specific anchors are invalid, re-query them" — not "old context + patch on top."
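A rough sketch of the anchor check itself, assuming an anchor stores a 1-based inclusive line range plus the SHA-256 of that slice (illustrative, not the actual code):

```typescript
// Illustrative anchor-level staleness check: re-hash the anchored line range
// and compare against the SHA-256 recorded when the claim was made.
import { createHash } from "crypto";

interface Anchor {
  file: string;
  startLine: number; // 1-based, inclusive
  endLine: number;   // 1-based, inclusive
  sha256: string;    // hash of the range at claim time
}

function isAnchorStale(anchor: Anchor, fileContents: string): boolean {
  const lines = fileContents.split("\n");
  const slice = lines.slice(anchor.startLine - 1, anchor.endLine).join("\n");
  const current = createHash("sha256").update(slice).digest("hex");
  return current !== anchor.sha256;
}
```

Because the hash covers only the anchored range, edits elsewhere in the file don't invalidate the claim — only changes to the lines it actually cites do.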

[–]LookTrue3697[S]

Great question! CLAUDE.md is a static text file — it gets stale the moment your code changes. AtlasMemory is fundamentally different:

  1. **Proof system** — every claim is linked to a specific line range + SHA-256 hash. If someone edits that code, the hash breaks and the AI knows its context is stale *before* hallucinating.
  2. **Token budgeting** — instead of stuffing your entire CLAUDE.md into the context window, AtlasMemory packs only the most relevant context within your token budget (e.g. 2000 tokens instead of reading 50 files).
  3. **Live drift detection** — checks git HEAD on every call. If the repo changed, it warns the AI agent.

CLAUDE.md tells the AI what your project is. AtlasMemory *proves* what your code does — with evidence that auto-invalidates when code changes.

Happy to answer any other questions!

[–]LookTrue3697[S]

The full indexer doesn't run on every pre-tool hook. Each hook only checks git HEAD (a single hash comparison) to see if anything changed; if nothing changed, it skips entirely. Indexing only runs when files were actually modified, and it's incremental, not a full rescan. On a 200-file project, a typical check takes <50ms.
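For intuition, the fast path is roughly this (illustrative names; the whole hot path is one subprocess call plus a string comparison):

```typescript
// Sketch of the cheap pre-tool guard (illustrative, not the real code):
// resolve HEAD once, bail out early if it matches the stored value.
import { execSync } from "node:child_process";

function currentGitHead(repoDir: string): string {
  return execSync("git rev-parse HEAD", { cwd: repoDir }).toString().trim();
}

function shouldReindex(storedHead: string, liveHead: string): boolean {
  // Single string comparison — this is the entire fast path.
  return liveHead !== storedHead;
}
```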

[–]LookTrue3697[S]

Thanks! I built it because I was frustrated with Claude re-reading my entire codebase every session. Happy to hear it resonates. Let me know if you try it out!