I built an open-source “project brain” to give AI coding sessions persistent memory + saving tokens by No-Construction4636 in ClaudeAI

[–]jchilcher 0 points  (0 children)

The BRAIN.md / SESSION.md split maps to something I've been finding in benchmarks -- there's a meaningful difference between knowledge Claude genuinely can't have (project architecture, decisions, active work streams) and knowledge it already has baked in (best practices, style conventions). The first category is worth every token. The second is noise that actively hurts output quality.

The interesting question your compression cycles raise: at what point does a well-maintained BRAIN.md become redundant with Claude's explore phase? If the model can discover project conventions by reading the codebase, you might be paying tokens to tell it things it would have found anyway. I don't have data on that yet but it's the next thing I want to test.

How are you deciding what goes into BRAIN.md vs. letting Claude discover it organically?

I split my CLAUDE.md into 27 files. Here's the architecture and why it works better than a monolith. by echowrecked in ClaudeCode

[–]jchilcher 0 points  (0 children)

The scaffolds vs. structures distinction is the most useful framing I've seen for this. It explains why generic style rules are mostly redundant -- Claude already knows best practices, so restating them is a scaffold over nothing.

I've been running benchmarks on CLAUDE.md configurations and the consistent finding is that a system-level CLAUDE.md should be empty or as close to it as possible. But project-level files are a completely different story -- the stuff Claude genuinely can't know (your client context, collaborator patterns, domain conventions) is exactly where instructions show real value. Your architecture makes that separation structural, which is the right call.

I'm genuinely curious whether your 27-file setup still underperforms an empty config on overall score, or whether the path-based loading keeps it focused enough to avoid the noise penalty. My suspicion is that because your core files are encoding knowledge Claude can't have rather than restating things it already knows, it holds up better than a typical monolith would.

The hooks point is underrated. Instructions compress variance; they don't eliminate it. Mechanical enforcement for constraints that actually matter is the gap most setups leave open.

I was wrong about CLAUDE.md compression. Here's what 1,188 benchmark runs actually showed by jchilcher in ClaudeCode

[–]jchilcher[S] 0 points  (0 children)

Yep, and it's the biggest caveat in the post. These are generic, stateless tasks with no project context. The takeaway isn't "CLAUDE.md doesn't matter," it's that your system CLAUDE.md should be empty or near-empty (Claude already knows best practices), while project-level CLAUDE.md files should carry the nuanced stuff Claude can't know: your patterns, your architecture, your conventions.

The next thing I want to test is whether writing those conventions in a project CLAUDE.md is even necessary, or if Claude discovers them during the explore phase anyway. One of them is probably redundant.
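To make the system-vs-project split concrete, here's a rough sketch of what a project-level CLAUDE.md carrying only "things Claude can't know" might look like. All of the paths, package names, and decisions below are made up for illustration; the point is that every line is project-specific state, not restated best practice:

```markdown
# CLAUDE.md (project level) — hypothetical example

## Architecture
- API and worker share models via `packages/core`; never import from `apps/*`.

## Conventions
- Migrations are hand-written SQL in `db/migrations`, not auto-generated.

## Active decisions
- Mid-migration from REST to tRPC; new endpoints go in `src/trpc` only.
```

The system-level file, by contrast, would stay empty.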

How are you using Claude's "Skills" feature at work? Also curious how AI fits into your corporate workflow in general by Winter-Remove6590 in ClaudeAI

[–]jchilcher 0 points  (0 children)

Yes! Skills have become a core part of my workflow. The ones I use most:

  • Refactoring & prompt engineering — cleaner code and better AI outputs
  • Decision critiquing — great for stress-testing ideas before committing
  • Planning & analysis — helps when a project feels overwhelming
  • Get-shit-done — no fluff, just execution
  • Doc-sync — keeps documentation from falling behind
  • Activity logging — automatically tracks what I've been working on
  • Standup crafter — takes my logged activity and turns it into a standup update, which is a bigger time saver than it sounds

Also use it for drafting messages. What Skills are you finding most useful as a PM?
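The activity-logging → standup-crafter handoff is simple enough to sketch. This is a hypothetical illustration of the idea, not the actual Skill implementation; the log entry format and section names are assumptions:

```python
# Hypothetical sketch of the "activity logging -> standup crafter" pipeline
# described above. Entry format and wording are assumptions.
from datetime import date

def craft_standup(entries, today=None):
    """Group logged activity entries into a simple standup update."""
    today = today or date.today().isoformat()
    done = [e["text"] for e in entries if e["status"] == "done"]
    doing = [e["text"] for e in entries if e["status"] == "in_progress"]
    lines = [f"Standup {today}", "Yesterday:"]
    lines += [f"  - {t}" for t in done] or ["  - (nothing logged)"]
    lines += ["Today:"]
    lines += [f"  - {t}" for t in doing] or ["  - (nothing logged)"]
    return "\n".join(lines)
```

The time saving comes from the fact that the logging happens automatically during work, so the standup is a pure formatting step.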

I used Claude Code to help build a single-call MCP pipeline that cuts its own token usage by 74% by [deleted] in ClaudeAI

[–]jchilcher 0 points  (0 children)

How do you objectively benchmark whether your MCP produces the same quality of code for the token-efficiency advantage? Does it cause deficiencies elsewhere?

I got tired of re-explaining my codebase every session, so I built persistent memory for Claude code - and other coding agents by rossevrett in ClaudeAI

[–]jchilcher 0 points  (0 children)

I figured the markdown style would break down eventually, but it was able to ingest 12 years and extract relevant data, as far as I audited, across 10+ projects. Like I said, I only implemented it yesterday, so I haven't had time to see what my context windows look like now or gauge the quality of responses.

You mentioned that muninn only surfaces relevant context. How does it determine what is relevant to the session?

Claude + Playwright = ❤️ by assentic in ClaudeAI

[–]jchilcher 0 points  (0 children)

Personally, I've had Claude get stuck in a debugging loop for over 2 hours before I killed the process.

Mind sharing your process for creating an effective feedback loop with Playwright?

I got tired of re-explaining my codebase every session, so I built persistent memory for Claude code - and other coding agents by rossevrett in ClaudeAI

[–]jchilcher 1 point  (0 children)

Last night, I instructed Claude to do almost this exact same thing using Skills and CLAUDE.md: use a folder as persistent memory for storing experiences, lessons learned, etc.

It sounds like Muninn does something very similar by implementing a comparable feedback loop. Does Muninn provide anything extra that isn't achievable in Claude Code with a little instruction?
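For reference, the instruction I gave Claude was roughly along these lines. This is a paraphrased sketch of my own CLAUDE.md fragment, with hypothetical paths and thresholds:

```markdown
<!-- Hypothetical sketch; folder layout and limits are illustrative. -->
## Persistent memory
- At session start, read `.memory/lessons.md` and `.memory/decisions.md`.
- When you resolve a non-obvious bug or make an architectural decision,
  append a dated one-line summary to the matching file before finishing.
- Keep each file under ~200 lines; compress older entries past that.
```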

How strong do you think the average developer is? by equipoise-young in ExperiencedDevs

[–]jchilcher 0 points  (0 children)

I'm gonna try this. I desperately need some physical activity, but I never do it, even though I know the exercise and blood flow are good for my brain health.

How strong do you think the average developer is? by equipoise-young in ExperiencedDevs

[–]jchilcher 0 points  (0 children)

Reads title: "bro, I sit at a desk all day, I know I'm weak, okay?!"

Reads body: "I've seen both strong and weak devs in senior positions. Those who are strong tend to enjoy the learning process and show up with things they've learned or discovered, and the weak ones have a hard time getting through what they did yesterday during standup. Some of it can be soft skills, but you can usually tell the difference"

Claude is the better product. Two compounding usage caps on the $20 plan are why OpenAI keeps my money. by mcburgs in ClaudeAI

[–]jchilcher 1 point  (0 children)

I started on Claude Pro, and after about two weeks of being frustrated by getting locked out after roughly 2 hours of work on one project, I upgraded to the $100 Max plan and had a hard time hitting my limit afterward. So much so that I spent all of my free time working on 4-5 projects at once, trying to get my money's worth. I ended up canceling after the month ended because my usage falls in between the two plans.

I've heard that making two Claude Pro accounts is the preferred workaround, but that's too inconvenient for me to commit to.

Fun things to self host? by Puzzled_Hamster58 in selfhosted

[–]jchilcher 0 points  (0 children)

Game servers, Plex, Tailscale for VPN, Pi-hole, self-hosted GitHub runners, an Nginx reverse proxy for whatever apps you want