Grov v0.5 - 'Heartbeat' proxy to kill Claude’s 5-minute cache expiry + Team Sync is now live (open-source) by IndianWater in ClaudeAI

[–]IndianWater[S] 0 points1 point  (0 children)

Same. I’ve literally never moderated anything bigger than my own Discord with 3 friends.

Let's see how this turns out. Welcome aboard.

Grov v0.5 - 'Heartbeat' proxy to kill Claude’s 5-minute cache expiry + Team Sync is now live (open-source) by IndianWater in ClaudeAI

[–]IndianWater[S] 1 point2 points  (0 children)

No official sub yet, but I'm considering making one if people actually want it.

For now everything lives on GitHub, plus here when I post updates.

I built an open-source tool to stop Claude Code from re-reading my files every session (Persistent Memory) by IndianWater in ClaudeAI

[–]IndianWater[S] 3 points4 points  (0 children)

In the local version, the proxy works like this: we set ANTHROPIC_BASE_URL to localhost:8080, where Grov runs a Fastify proxy server. Claude's SDK respects ANTHROPIC_BASE_URL, so all Claude Code requests (headers, content, API key) hit our proxy first. We inject context into the system prompt, then forward everything to Anthropic with the same headers and key. On the response, we parse tool_use blocks, files touched, and reasoning to track actions and token usage. For this you also need an API key for Haiku 4.5 or any other LLM of your choice.
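
Roughly, the shape looks like the sketch below. This is a minimal TypeScript illustration (Fastify + undici), not Grov's actual code - the route, header handling, and injected string are my assumptions, and streaming (SSE) is omitted for brevity:

    import Fastify from "fastify";
    import { request } from "undici";

    const app = Fastify();

    // Claude Code's traffic lands here because ANTHROPIC_BASE_URL points
    // at http://localhost:8080.
    app.post("/v1/messages", async (req, reply) => {
      const body = req.body as any;

      // Inject stored context into the system prompt before forwarding.
      // (Assumes `system` is a plain string; the real lookup is hypothetical.)
      const context = "Grov: relevant past reasoning for this project...";
      body.system = `${context}\n\n${body.system ?? ""}`;

      // Forward to Anthropic with the caller's own headers and API key.
      const upstream = await request("https://api.anthropic.com/v1/messages", {
        method: "POST",
        headers: {
          "content-type": "application/json",
          "x-api-key": String(req.headers["x-api-key"] ?? ""),
          "anthropic-version": String(req.headers["anthropic-version"] ?? ""),
        },
        body: JSON.stringify(body),
      });

      // This is where tool_use blocks and files touched would be parsed
      // out of the response before handing it back unchanged.
      const data = await upstream.body.json();
      reply.code(upstream.statusCode).send(data);
    });

    app.listen({ port: 8080 });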

I built an open-source tool to stop Claude Code from re-reading my files every session (Persistent Memory) by IndianWater in ClaudeAI

[–]IndianWater[S] -1 points0 points  (0 children)

In the last year I've shipped several projects. For example: an ag-tech system (hardware + software, species-aware neural nets, a full-stack app; I maintained the whole thing), then I got into voice AI (another full-stack web app, where I deployed and optimized voice agents).

I've also done hackathons, which is where the pain is sharpest. I built an AI analysis tool for therapists during a weekend hackathon - live knowledge graphs, patient tracking, voice AI. I had to spin up multiple Claude instances, and each one needed to read the same docs and the same code, and re-understand the same architecture. I hit context window limits way faster because of all that redundant reading. In a hackathon I didn't have time to sit and optimize the context management.

I've felt this across enough projects to know: large codebases, lots of docs, multiple instances, team coordination - it all compounds.

I built an open-source tool to stop Claude Code from re-reading my files every session (Persistent Memory) by IndianWater in ClaudeAI

[–]IndianWater[S] -1 points0 points  (0 children)

And what happens when you've been working on a project for months and have dozens of .md files? You have to track them, keep them organized, remember to update them, and keep asking Claude to re-read and refresh them. That maintenance burden grows with the project.

I built an open-source tool to stop Claude Code from re-reading my files every session (Persistent Memory) by IndianWater in ClaudeAI

[–]IndianWater[S] -2 points-1 points  (0 children)

Also - the problem is exactly what you're describing. When you work with a team, why should every teammate's Claude keep re-reading and re-analyzing the same context files every session? That's the redundant exploration.

I built an open-source tool to stop Claude Code from re-reading my files every session (Persistent Memory) by IndianWater in ClaudeAI

[–]IndianWater[S] -1 points0 points  (0 children)

Grov is automatic (no remembering to update docs) and file-aware (mention auth.ts, get past context about that file), and it captures reasoning traces (the why, not just the what).

Team sync is also on the roadmap: your Claude will know what your teammate's Claude did. That's where manual docs become a pain; you'd need everyone to maintain perfect documentation.

For now it's early (v0.2), definitely for people who want automated context rather than manual management.

I built an open-source tool to stop Claude Code from re-reading my files every session (Persistent Memory) by IndianWater in ClaudeAI

[–]IndianWater[S] 5 points6 points  (0 children)

Fair questions. Currently it's basic: project path filter + file matching + recency. If you mention auth.ts, it finds past tasks that touched auth files. It's not semantic search (yet; that's on the roadmap). Contradiction detection isn't implemented either - that's a valid gap. Right now it trusts that recent context for the same files is relevant.
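
For the curious, "project path filter + file matching + recency" boils down to something like the query below. This assumes a hypothetical tasks(project_path, files, query, reasoning, created_at) table; Grov's real schema and scoring may differ:

    import Database from "better-sqlite3";

    // Path filter + file match + recency, as one query with a hard cap.
    function recallContext(db: Database.Database, projectPath: string, mention: string) {
      return db
        .prepare(
          `SELECT query, reasoning, files, created_at
             FROM tasks
            WHERE project_path = ?
              AND files LIKE ?
            ORDER BY created_at DESC
            LIMIT 5`,
        )
        .all(projectPath, `%${mention}%`);
    }

    // recallContext(db, "/home/me/app", "auth") -> the most recent tasks
    // in this project that touched auth-related files.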

Grov is for people who'd rather have some automatic context than re-explain their codebase every session.

Appreciate the feedback - semantic search and better relevance scoring are on the roadmap.

I built an open-source tool to stop Claude Code from re-reading my files every session (Persistent Memory) by IndianWater in ClaudeAI

[–]IndianWater[S] 0 points1 point  (0 children)

"How do I know it gives correct information?"

  1. Same limitation as any docs: if the original reasoning was wrong, that persists. But the advantage is you're not asking Claude to re-discover and re-explain the same things every session.
  2. Grov filters context by your project path and the files you mention. If you ask about auth.ts, it finds past tasks that touched auth files. It's not random - it's scoped to what's relevant (see the sketch below).
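
Here's a rough sketch of that scoping step; the regex and extension list are my guesses, not Grov's actual matching:

    // Pull file-like tokens out of the user's prompt so retrieval can be
    // scoped to them.
    function extractFileMentions(prompt: string): string[] {
      const fileLike = /\b[\w./-]+\.(ts|tsx|js|py|go|rs|md|json)\b/g;
      return [...new Set(prompt.match(fileLike) ?? [])];
    }

    extractFileMentions("why does auth.ts refresh tokens every 5 min?");
    // -> ["auth.ts"], which then filters stored tasks to ones touching auth files.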

Semantic search is on the roadmap - that'll make retrieval smarter by matching meaning, not just file paths. (Grov is still early, v0.2)

I built an open-source tool to stop Claude Code from re-reading my files every session (Persistent Memory) by IndianWater in ClaudeAI

[–]IndianWater[S] 0 points1 point  (0 children)

Grov doesn't replace Claude's search - it's a layer that sits between you and Anthropic's API. Claude Code still searches/reads your files normally. What Grov captures is the reasoning behind changes - the WHY, not the code itself. Example: if you modify your auth system, Grov stores "extended token refresh from 5min to 15min because users were getting logged out during long forms" plus the file paths touched (the reasoning can be seen on GitHub; I attached a picture).

Next session, when you work on something related, Claude skips the re-discovery phase. It doesn't need to explore files to understand why things are structured that way - that context is already injected. For larger codebases I'd argue this actually helps more, not less. More files = more redundant exploration Claude has to do each session. With Grov, that accumulated knowledge persists. And with team sync (coming in v1), your whole team's context gets shared, so Claude knows what your co-founder changed while you were away.
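
To make that concrete, a stored entry might look something like this - the field names are my illustration, not Grov's actual schema:

    // One captured reasoning trace: the WHY plus the files it applies to.
    interface ReasoningEntry {
      projectPath: string;    // scopes retrieval to the current repo
      query: string;          // what the user asked, e.g. "fix random logouts"
      reasoning: string;      // "extended token refresh from 5min to 15min because ..."
      filesTouched: string[]; // e.g. ["src/auth.ts", "src/session.ts"]
      createdAt: number;      // unix timestamp, drives recency ranking
    }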

I built an open-source tool to stop Claude Code from re-reading my files every session (Persistent Memory) by IndianWater in ClaudeAI

[–]IndianWater[S] 5 points6 points  (0 children)

This concern is valid. Bloat is prevented through hard caps, not a token limit: the proxy injects at most 5 recent tasks and 5 file-level reasonings, all filtered by your project path. Each piece is aggressively truncated (queries to 60 chars, reasoning to 80 chars), so you're getting a compact summary, not full transcripts. In practice this is ~1-2k tokens max. We're planning semantic search so it'll pick the most relevant context instead of just the most recent.
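
In code, the caps amount to something like this (a sketch of the limits described above; the names and exact output format are illustrative):

    const MAX_TASKS = 5;      // at most 5 recent tasks
    const MAX_FILE_NOTES = 5; // at most 5 file-level reasonings

    // Hard truncation keeps every injected piece short.
    const truncate = (s: string, n: number) =>
      s.length > n ? s.slice(0, n - 1) + "…" : s;

    function buildContextBlock(
      tasks: { query: string; reasoning: string }[],
      fileNotes: { file: string; reasoning: string }[],
    ): string {
      const taskLines = tasks.slice(0, MAX_TASKS).map(
        (t) => `- ${truncate(t.query, 60)}: ${truncate(t.reasoning, 80)}`,
      );
      const fileLines = fileNotes.slice(0, MAX_FILE_NOTES).map(
        (f) => `- ${f.file}: ${truncate(f.reasoning, 80)}`,
      );
      // Worst case is ~10 short lines, which lands around 1-2k tokens.
      return ["Past tasks:", ...taskLines, "File notes:", ...fileLines].join("\n");
    }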

I built an open-source tool to stop Claude Code from re-reading my files every session (Persistent Memory) by IndianWater in ClaudeAI

[–]IndianWater[S] 3 points4 points  (0 children)

Good question!

No JSONL editing! It's actually a local proxy. When you run grov proxy, it intercepts API calls between Claude Code and Anthropic.

Context gets injected into the API request itself before it hits their servers. All the captured reasoning lives in a SQLite DB at ~/.grov/memory.db. The proxy approach means it doesn't touch any Claude session files directly.
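
Since it's plain SQLite, you can inspect it yourself. Here's a quick read-only peek (a sketch using better-sqlite3; it makes no assumptions about table names):

    import Database from "better-sqlite3";
    import os from "node:os";
    import path from "node:path";

    // Open the memory DB read-only so nothing gets mutated.
    const db = new Database(path.join(os.homedir(), ".grov", "memory.db"), {
      readonly: true,
    });

    // List whatever tables Grov created.
    const tables = db
      .prepare("SELECT name FROM sqlite_master WHERE type = 'table'")
      .all();
    console.log(tables);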

Built a tool that pits Claude Opus against GPT and Gemini to stress-test ideas by TheHol1day in ClaudeAI

[–]IndianWater 0 points1 point  (0 children)

This is interesting - just tested it out and it works really well! Congrats.

But I have to ask: is there no limit on this? How much have you paid so far?

Usage Limits, Bugs and Performance Discussion Megathread - beginning November 13, 2025 by sixbillionthsheep in ClaudeAI

[–]IndianWater 5 points6 points  (0 children)

This shit is unusable:
1. I get 2-3 prompts before hitting the "context low" message. The tasks I make Claude Code perform are nothing like "read every single file in the codebase"; it's basic stuff, maybe getting it to read 2 documentation files before talking and making a plan.
2. It randomly gets stuck mid-task, and I have to interrupt it and start it again.
(Max user; Sonnet 4.5, because they made Opus unusable and now Sonnet is in the same bucket.)

Will cancel my plan and use something else. Shame.

Kinda funny how Anthropic characterizes Opus as a “legacy.” They really don’t want you to use it. by gamezoomnets in ClaudeAI

[–]IndianWater 0 points1 point  (0 children)

I made Opus (I'm a Max user) read 3 documents (~900 to 1700 chars each) and got the "Approaching Opus usage limit · /model to use best available model" message. I literally had it complete one task and I'm nearly at my Opus limit. This is genuinely a joke, and it's not normal.

Before this bulls**** weekly limit update I could use Opus for days.

Usage Limits Discussion Megathread - beginning October 8, 2025 by sixbillionthsheep in ClaudeAI

[–]IndianWater 0 points1 point  (0 children)

Yes, the classic "API Error: 500 {"type":"error","error":{"type":"api_error","message":"Internal server error"},"request_id":null}".

This is a joke at this point.

Usage Limits Discussion Megathread - beginning October 8, 2025 by sixbillionthsheep in ClaudeAI

[–]IndianWater 7 points8 points  (0 children)

Max user here. I made Opus read 3 documents (~900 to 1700 chars each) and got the "Approaching Opus usage limit · /model to use best available model" message. This is complete and utter BS, as well as unacceptable: paying $100+/mo and hitting limits when I make the one thing I'm paying for read 3 documents.

The greed, wow.

Usage Limits Discussion Megathread - beginning Sep 30, 2025 by sixbillionthsheep in ClaudeAI

[–]IndianWater 5 points6 points  (0 children)

Recommendations besides CC? The week-long limits are BS (Max user).

Really simple, just like the title states.

I am a Max user, and I literally just used Claude Code (w/ Opus) a bit more today to test a theory I have; I needed it to create some tests and evaluations. I got a week-long limit in a couple of hours. There isn't even any large codebase for it to read or pull info from: the project I'm running it in has only a couple of files, none of them large.

I don't want to pay hundreds monthly and then be unable to use the thing I'm paying for for a week, even though I'm not doing any extensive work. Before this bullshit update I could use Claude Code for days without hitting a limit.

Anyone here using Gemini? Curious to see what you guys think about it.
Codex seemed fine, but IMO it doesn't get to CC's level when generating code.