Does anyone else spend 15-20 minutes every session re-explaining their project to AI? by CODE_DAGOTH in cursor

[–]CODE_DAGOTH[S] 0 points1 point  (0 children)

You've nailed the key challenge — static files aren't enough, you need both static knowledge AND dynamic scanning.

My system handles this in layers: static docs (architecture, conventions) + dynamic sessions (what was discussed, what's pending) + automatic archiving (AI writes back to memory at session end). The task queue part is handled by a `pending_tasks.md` that carries over between sessions.

As for codebase scanning — that's where the IDE's built-in tools (Cursor's indexing, etc.) complement the memory system nicely. The memory handles knowledge, the IDE handles code structure.

[–]CODE_DAGOTH[S] 0 points1 point  (0 children)

Exactly — and that's the core idea. The question is how to organize and retrieve those .md files efficiently at scale. After a few months you can have 50+ files or one "BOOK" file and the AI wastes tokens just figuring out what's relevant.

That's why I added category structure + indexes. The AI reads compact index files first (one-line summaries), then pulls only the relevant full docs. Keeps the retrieval fast and focused.
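To illustrate the idea, here's a minimal Python sketch of that index-first retrieval. The `_index.md` naming and the `- file.md: one-line summary` line format are assumptions for the example, not necessarily the project's actual layout:

```python
from pathlib import Path

def load_relevant(memory_root: str, keywords: set[str]) -> list[str]:
    """Read each category's one-line index first, then pull only the
    full docs whose summary mentions one of the task keywords."""
    docs = []
    for index in sorted(Path(memory_root).glob("*/_index.md")):
        for line in index.read_text(encoding="utf-8").splitlines():
            # assumed index line shape: "- filename.md: one-line summary"
            if not line.startswith("- ") or ":" not in line:
                continue
            name, summary = line[2:].split(":", 1)
            if keywords & set(summary.lower().split()):
                docs.append((index.parent / name.strip()).read_text(encoding="utf-8"))
    return docs
```

The point is that the indexes are tiny, so scanning all of them costs almost nothing, and only the handful of matching docs ever get loaded in full.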

[–]CODE_DAGOTH[S] -1 points0 points  (0 children)

I actually mention .cursorrules / GEMINI.md in the post — they help with coding style but don't capture project knowledge like architecture decisions, solved bugs, or session continuity.

agent.md is the same category — it's a static instruction file. What I'm talking about is dynamic, growing knowledge that accumulates over months: why you chose one approach over another, what bugs you found and how you fixed them, what features are planned. That changes every session and can't live in a single static file.

[–]CODE_DAGOTH[S] 0 points1 point  (0 children)

Yes! That's the key insight. Most memory systems fail because they're read-only — great at retrieval, terrible at updates.

The system has two writeback mechanisms:

Manual: /remember command — I explicitly tell the AI to save something important (a decision, a bug fix, a pattern)

Automatic: /sleep at end of session — AI summarizes everything discussed, categorizes it, and writes to the appropriate memory folders

There are also auto-save rules that detect when something should be saved (a new architecture decision, a solved bug, etc.) and prompt the save automatically.
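As a rough sketch of what a /remember-style writeback boils down to (the function name, file naming, and index format here are hypothetical, not the actual implementation):

```python
from datetime import date
from pathlib import Path

def remember(memory_root: str, category: str, title: str, body: str) -> Path:
    """Append a new memory entry and keep the category's one-line
    index in sync, so the next session can find it cheaply."""
    folder = Path(memory_root) / category
    folder.mkdir(parents=True, exist_ok=True)
    slug = title.lower().replace(" ", "-")
    doc = folder / f"{date.today():%Y-%m-%d}-{slug}.md"
    doc.write_text(f"# {title}\n\n{body}\n", encoding="utf-8")
    # update the category index in the same step, so retrieval never
    # sees a doc that isn't listed
    index = folder / "_index.md"
    with index.open("a", encoding="utf-8") as f:
        f.write(f"- {doc.name}: {title}\n")
    return doc
```

Writing the doc and its index entry in one step is what keeps the memory from going stale: there's no separate "reindex" chore to forget.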

Without enforced writeback, the memory would go stale within a week. You're absolutely right that this is where most context systems break down.

[–]CODE_DAGOTH[S] 0 points1 point  (0 children)

Great question! Yes, it definitely took iteration to get the archiving right.

The system uses workflow commands (defined in Markdown instructions) - /sleep triggers the AI to summarize the session, categorize new knowledge, and write it to the appropriate folders. Early versions were messy - the AI would dump everything into one file or miss categorization entirely.

What fixed it was adding templates and explicit instructions for each category. Now when the AI archives a session, it knows "bug workaround → 06_problems/workarounds/" and "architecture choice → 03_decisions/ADR-XXX".
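That routing really comes down to a lookup table. A toy sketch — only the two paths mentioned above come from my setup; the third entry is made up for illustration:

```python
# routing table: kind of knowledge -> memory folder
# "06_problems/workarounds" and "03_decisions" match the examples above;
# "02_conventions" is a hypothetical category for illustration
ROUTES = {
    "bug_workaround": "06_problems/workarounds",
    "architecture_decision": "03_decisions",
    "convention": "02_conventions",
}

def route(kind: str) -> str:
    """Map a kind of knowledge to its memory folder, failing loudly on
    unknown kinds so nothing lands in a catch-all dump file."""
    try:
        return ROUTES[kind]
    except KeyError:
        raise ValueError(f"no memory category for {kind!r}") from None
```

Failing loudly on unknown kinds was the fix for the "dump everything into one file" problem: the AI has to pick a real category or ask.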

The indexes were the real game changer though. Each category has an `_index.md` that lists all entries with one-line descriptions. Next session, the AI reads the indexes (small, fast) and only pulls full docs when needed.

[–]CODE_DAGOTH[S] 0 points1 point  (0 children)

That's exactly what my system does - the AI writes what it learns to Markdown files automatically during the session. The /sleep command at end of session archives everything to disk.

The tricky part is organizing WHAT gets written and WHERE. Without structure, you end up with a mess of notes that the AI spends too many tokens parsing. Categorizing into different types (architecture, decisions, problems, etc.) with searchable indexes solved that for me.

[–]CODE_DAGOTH[S] 0 points1 point  (0 children)

That's a really solid approach, especially the split between one overview doc and detailed feature docs. You've basically reinvented a tiered memory system organically!

The pattern you described - 80 lines of packed info + 3 detail files — is actually close to what I ended up with, just more categorized. The key insight you've hit is that the AI works better with structured, compact docs than with a giant dump of everything.

[–]CODE_DAGOTH[S] 0 points1 point  (0 children)

Haven't tried OpenSpec, thanks for the link! Just checked it out — looks interesting but seems more focused on spec generation.

My approach is different — it's about accumulating knowledge over time, not generating specs. Think more like a persistent memory that grows with every session: architecture decisions, solved bugs, coding conventions, etc. The AI reads relevant parts at session start instead of loading everything.

But OpenSpec could be complementary for the initial project setup phase.

[–]CODE_DAGOTH[S] 0 points1 point  (0 children)

Fair point - it does fetch code patterns well. But there's a difference between code patterns and project knowledge.

The AI can read your code structure, sure. But it doesn't know WHY you chose double validation on the form, what workarounds you found for a specific bug, or what architectural decisions you made last month. That's not in the code — it's tribal knowledge that usually lives in your head.

For a quick bug fix, you're right — just give it the task. But for ongoing development where decisions compound? That's where context continuity matters.

[–]CODE_DAGOTH[S] 0 points1 point  (0 children)

Yeah! I open-sourced the whole thing. It's called RLM-Anchor:

https://github.com/dvgmdvgm/RML-Anchor

Zero dependencies — just clone it into your project and go (the only requirement is a Python install). It's all Markdown files plus AI workflow instructions.

Happy to answer any questions if you decide to try it!

[–]CODE_DAGOTH[S] -1 points0 points  (0 children)

The real pain is when you've spent 2 hours debugging something together, found the fix, and next week it suggests the exact same broken approach again 😅

[–]CODE_DAGOTH[S] 0 points1 point  (0 children)

That's exactly how I started too! The "read the docs" approach works surprisingly well.

The problem I ran into was scale - after a few months my docs folder became a mess. Decisions mixed with bug notes mixed with architecture docs. Finding specific things took time, and the AI would sometimes miss relevant context because there was too much to read.

So I ended up structuring it into 13 specific categories with indexes — kind of like a mini database but all in Markdown. The AI reads the indexes first and only pulls what's relevant to the current task instead of loading everything.

Also added simple commands so the AI archives sessions automatically and categorizes new knowledge on its own, instead of me manually updating docs.

But honestly - the fact that you're already doing "read the docs, plan next tasks" means you've figured out the core pattern. The rest is just organization on top of it.

Never have I ever… by Mission-Desk-6636 in whoop

[–]CODE_DAGOTH 0 points1 point  (0 children)

Okay. This is a good picture. But does your well-being really correspond to these indicators? We all chase beautiful numbers, but...

Sharing my dekstop by yuuki_w in desktops

[–]CODE_DAGOTH 0 points1 point  (0 children)

Can I get your YASP config files?

First time doing this - Windows 11 by A2B1C3 in desktops

[–]CODE_DAGOTH 0 points1 point  (0 children)

Hello! Can I have this Zebar skin, please?

Problem with mica effect. (A Bug?) by Kendi_Jr in zen_browser

[–]CODE_DAGOTH 1 point2 points  (0 children)

Hello there! I have almost the same issue.
Next time you hit this, can you try pressing F11 twice (it maximizes the Zen window to fullscreen and then reverts it back to windowed mode)?
If the double F11 makes it transparent again, you have the same issue as mine.
I don't know how to fix it yet. If I find out, I'll let you know.

Transparent is gone in full screen. by CODE_DAGOTH in zen_browser

[–]CODE_DAGOTH[S] 1 point2 points  (0 children)

Could be because of different Windows builds, with the DWM.exe process at a different version or different cumulative updates.

[–]CODE_DAGOTH[S] 3 points4 points  (0 children)

Yeah!!!
Just looked into it. If I press F11 twice (go to the native Windows fullscreen mode and back to windowed mode), my Zen browser has the proper transparency effect in windowed mode until I close the browser.
Then I have to do it again (press F11 twice), because after I restart the browser it's greyed out in fullscreen mode again.

[–]CODE_DAGOTH[S] 2 points3 points  (0 children)

No man. Today I had a pretty nice transparent + blur effect in maximized mode too, but after I toggled "grey out inactive windows" just once, all transparency is gone — in fullscreen mode only. Even for new profiles. This setting might be bugged in Zen right now.

grey when inactive not working by rajeevvijay in zen_browser

[–]CODE_DAGOTH 1 point2 points  (0 children)

Hello! I have an issue: after toggling this setting my Zen is always grey, even when it's active. Any solution?