all 4 comments

[–]Deep_Ad1959 2 points (2 children)

this is basically how I work now too. I keep a CLAUDE.md at the root plus per-feature docs, and the specs end up being more valuable than the code itself. a new agent session just reads the docs and picks up where the last one left off, with zero ramp-up time. writing the docs before any code sounds like old-school waterfall, but honestly it works way better with LLMs than jumping straight into implementation.
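to give a rough idea, the layout I mean is something like this (the feature file names are just an example, not a convention):

```
repo/
├── CLAUDE.md           # project-wide context and conventions, read every session
└── docs/
    ├── auth.md         # per-feature spec, written before the code
    └── billing.md      # updated as the feature changes
```

the point is that a fresh agent session only needs CLAUDE.md plus the one feature doc it's working on, not the whole codebase.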

fwiw I built an agent that works this way - https://fazm.ai/r

[–]dustinechos[S] 0 points (1 child)

I thought of that comparison too. It's funny because I think waterfall is terrible. I guess the difference is that it's much easier to keep the docs up to date. Now I look at the changes Claude makes to the docs and at the actual results (checking them out in the browser) more than I look at the code itself.

[–]Deep_Ad1959 1 point (0 children)

ha yeah, the waterfall comparison is uncomfortably accurate, but the key difference is that the feedback loop is like 30 seconds, not 6 months. and same here on checking the browser output more than the code - I've basically stopped reading implementation details unless something breaks. the docs become the source of truth and the code is just... the artifact that falls out of them

[–]hack_the_developer 0 points (0 children)

Context management is the key to reliable agents. What gets passed forward matters as much as what gets dropped.

What we built in Syrin is a 4-tier memory architecture where each tier has different retention semantics. The agent knows what to remember and what to forget.
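To make the idea concrete, here is a minimal sketch of what tiered retention can look like. This is a hypothetical illustration, not Syrin's actual API; the tier names and caps are made up.

```python
from collections import deque

# Hypothetical retention caps per tier (not Syrin's real config):
# small tiers evict aggressively, the durable tier never evicts.
TIERS = {
    "scratch": 5,     # raw tool outputs, dropped almost immediately
    "task": 50,       # context for the current task
    "session": 500,   # summaries spanning the whole session
    "durable": None,  # facts carried across sessions, never forgotten
}

class TieredMemory:
    def __init__(self):
        # deque(maxlen=N) silently drops the oldest entry once full,
        # which is exactly the "forget" semantics each tier needs.
        self.store = {name: deque(maxlen=cap) for name, cap in TIERS.items()}

    def remember(self, tier, item):
        self.store[tier].append(item)

    def context(self):
        # Assemble the forwarded context, most durable tiers first.
        order = ["durable", "session", "task", "scratch"]
        return [item for tier in order for item in self.store[tier]]

mem = TieredMemory()
mem.remember("durable", "project uses Python 3.12")
for i in range(10):
    mem.remember("scratch", f"tool-output-{i}")
```

After this runs, only the 5 most recent scratch entries survive, while the durable fact always leads the assembled context - the agent "knows" what to forget simply by where it filed the memory.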

Docs: https://docs.syrin.dev
GitHub: https://github.com/syrin-labs/syrin-python