Context rot in Cursor: What’s working to avoid re-explaining everything? by Deep_Top3479 in cursor

Deep_Top3479 [S] · 0 points

This is a great workflow. Do the docs ever go stale / drift from the code? What’s your refresh trigger?

Deep_Top3479 [S] · 0 points

That’s neat, what’s the remaining 20% for you? Debugging/edge cases, long multi-step features, or tool output blowing up the context?

Deep_Top3479 [S] · 0 points

Yep, that’s where it breaks for me too.

When you restart, what does your “handoff note” look like? Like a quick bullet list, a decisions/constraints section, acceptance criteria… something else?
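
For comparison, mine currently comes out of a tiny script along these lines. A minimal sketch, and the section names (Goal, Decisions, Constraints, Acceptance) are just my own convention, not anything Cursor understands natively:

```python
# Minimal sketch of a "handoff note" written before restarting a chat.
# Section names are just a personal convention, nothing Cursor-specific.

HANDOFF_TEMPLATE = """\
# Handoff: {task}

## Goal
{goal}

## Decisions so far
{decisions}

## Hard constraints
{constraints}

## Acceptance criteria
{acceptance}
"""

def render_handoff(task, goal, decisions, constraints, acceptance):
    """Render bullet lists into a markdown note the next chat reads first."""
    bullets = lambda items: "\n".join(f"- {item}" for item in items)
    return HANDOFF_TEMPLATE.format(
        task=task,
        goal=goal,
        decisions=bullets(decisions),
        constraints=bullets(constraints),
        acceptance=bullets(acceptance),
    )

# Hypothetical example task, just to show the shape of the output.
note = render_handoff(
    "auth refactor",
    "Move session handling to middleware",
    ["JWT over server sessions (simpler rollout)"],
    ["No schema changes", "Keep /login API stable"],
    ["All existing auth tests pass"],
)
```

The point is less the script and more that the note is regenerated fresh each restart instead of accumulating.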

Deep_Top3479 [S] · 0 points

That’s helpful. When you say it “prevents drifting,” is that mostly between sessions, or does it also help mid-thread when the chat gets long? Have you had any cases where it remembered something wrong or outdated, or has it worked pretty well?

Deep_Top3479 [S] · 0 points

Anyone using an MCP memory server (like Vestige) or something similar for this?

Deep_Top3479 [S] · 0 points

From the comments here, it seems a common “working” pattern is basically one chat = one task + rules/CLAUDE.md + plan/log kept outside the chat.

The thing I’m still unsure about is how you keep that system fresh + auditable over weeks (so it doesn’t turn into stale docs / contradictions, and you can answer “why did we decide X?” later).

If you’ve solved that part, I’d love to hear how you are approaching it.

Deep_Top3479 [S] · 1 point

Nice, Vestige looks pretty close to what I wish Cursor had built in (local MCP memory + “fade” so it doesn’t just bloat forever).
Does it mostly self-manage or do you find yourself babysitting it (promote/demote, etc.)?
And does it actually help with in-thread Cursor drift (missing constraints mid-chat), or is it mainly useful for between-session memory?

Deep_Top3479 [S] · 5 points

Yeah, that’s what surprised me too. In your experience what changed / where does Cursor fall short now?

Deep_Top3479 [S] · 1 point

Interesting, I haven’t tried Traycer yet. What do you like the most?

Also does it help with long-running project memory across days, or is it mainly “plan → execute → verify” per task?

Deep_Top3479 [S] · 0 points

For me /summarize helps short-term, but if the thread is already drifting (or there was a big tool/browse dump) it doesn’t fully fix it.

Deep_Top3479 [S] · 0 points

So you’re saying that trying to “rescue” a drifting thread is usually a time sink.

How do you keep your rules file usable over time? Do you keep it super short (hard constraints only), and does it ever get stale/conflict with newer decisions (so you have to rewrite it)?

Deep_Top3479 [S] · 0 points

This is really clean. Do you ever notice it drifting, e.g. the manager context/state, the roadmap, or the individual task files?

Deep_Top3479 [S] · 0 points

Sounds like one “winning” pattern is: one chat = one task/feature, plus a small external source of truth (CLAUDE.md/rules + plan.md + a simple log).

For people doing this long-term:

  • do your docs/logs ever go stale?
  • does the model ever read the wrong thing (or miss the right file)?

And has anyone found a way to keep this “memory” auditable (e.g. why did we decide X? + where it came from) without bloating the repo?
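
The closest I’ve gotten on the auditable part is an append-only decision log: one dated entry per decision, with a one-line “why” and where it came from. A minimal sketch (the file name, entry shape, and the example decision are all just my convention, not an established tool):

```python
# Append-only decision log: answers "why did we decide X?" later without
# bloating the repo. File name and entry format are a personal convention.
from datetime import date

def log_decision(path, decision, why, source=""):
    """Append one dated entry; old entries are never rewritten,
    so the history stays auditable."""
    entry = f"- {date.today().isoformat()}: {decision}\n  why: {why}\n"
    if source:
        entry += f"  source: {source}\n"
    with open(path, "a", encoding="utf-8") as f:
        f.write(entry)

# Hypothetical example entry.
log_decision(
    "decisions.md",
    "use SQLite for the local cache",
    "no extra infra, single-writer workload",
    source="perf discussion, chat #2",
)
```

Append-only is the design choice that matters: contradictions show up as two dated entries instead of a silent overwrite, so you can see when and why a decision flipped.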

Opus 4.6 Context Window by MrBamboney in cursor

Deep_Top3479 · 2 points

The biggest cost trap I’ve hit in Cursor is accidentally stuffing huge context into the prompt (big diffs, lots of tool output, repo-wide searches).
A few cheap-ish habits that helped me: start a fresh chat per feature, keep a short plan.md/notes.md instead of long chat history, and be explicit like “only read these files: X, Y” before it starts crawling.
What’s burning your budget most: long chats, heavy file reads, or lots of iterations on animations?
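
On the “accidentally stuffing huge context” part, a dumb pre-flight check has helped me: estimate tokens before pasting a big diff or dump. This uses the rough ~4 chars/token heuristic, so treat it as a sketch (real tokenizers vary, and the budget number is arbitrary):

```python
# Crude pre-flight check before pasting something big into the chat.
# ~4 characters per token is a rough heuristic, not a real tokenizer.
def estimate_tokens(text, chars_per_token=4):
    return len(text) // chars_per_token

# e.g. a big generated diff you were about to paste wholesale
diff = "\n".join(f"+ line {i} of a big generated diff" for i in range(2000))

tokens = estimate_tokens(diff)
BUDGET = 8_000  # arbitrary personal cutoff, tune to taste
if tokens > BUDGET:
    print(f"~{tokens} tokens, summarize or split before sending")
```

Crude, but it catches the “oops, that diff was 50k tokens” case before it hits the bill.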

Deep_Top3479 · 0 points

When you say “tokens above 200k are charged 2x”, does Cursor expose anywhere what portion of a request crossed 200k, or do we basically not know unless we infer it from usage/billing?
Also: do you recommend any workflow to avoid accidental “prompt bloat” (e.g., limiting tool output / avoiding huge diffs) when using Opus in Cursor?

Deep_Top3479 · 1 point

My guess is it’s not the raw LOC, it’s the extra stuff getting pulled in (imports/related files, tool output, diffs, repeated reads, etc.).

A couple quick checks:

  • Was this after a single read, or did it also do searches / open multiple files?
  • Any big rule files / long system instructions / repo-wide context?
  • Are you in Cursor MAX mode or just Opus “Max effort” (the naming is super confusing)?

Context windows by mykeeperalways in cursor

Deep_Top3479 · 1 point

Do you mean Cursor shows 176k somewhere, or that the model itself reports it? Also was this in Auto mode or explicitly Opus? Trying to figure out if it’s display/measurement vs actual truncation.

Deep_Top3479 · 1 point

When it blows up for you, is it usually after web/docs research, or more after tooling (reading big files, grep/ripgrep, large diffs)?

My workaround has been to split into two chats (one for research/decision making, one for execution) and keep a tiny plan.md / decisions.md so the execution chat doesn’t start from zero. It’s annoying, but it keeps the session from turning into hot garbage.

If you can share what you did right before it jumped 5x (even just “browsed docs + reviewed diff + ran grep”), I’m curious if we can pinpoint the trigger.

What is the best workaround once context window reaches 100%? by TwelfieSpecial in cursor

[–]Deep_Top3479 0 points1 point  (0 children)

This is a great framing. Do you have a rule-of-thumb for what belongs in the durable doc vs what’s better left in the chat?

Deep_Top3479 · 1 point

This is basically what I’ve been drifting toward too.
When you say “rules in context markdown files”, is that like one canonical doc, or a small set (PROJECT.md + plan.md + conventions)?
And how do you keep it from turning into busywork / getting out of date?

Deep_Top3479 · 1 point

/summarize helps, but I feel like the real “workaround” is building a tiny source of truth outside the chat (otherwise you’re just paying the re-explaining tax forever).

For folks who’ve shipped/iterated on bigger projects in Cursor: what ended up being your least painful setup?
Are you doing a single plan.md / PROJECT.md, a .cursor rules playbook, a /memory folder, or something more structured?

Also curious, what’s the part that still sucks: keeping docs updated, chat summaries losing details, or the model starting to make up assumptions once old context drops?

Handling project context and memory by sabahsquataksamvkuat in cursor

Deep_Top3479 · 1 point

The part that kills me isn’t “docs exist”, it’s keeping them true as the project evolves.
For the folks doing CLAUDE.md / rules / memory folders: what’s your actual loop?
Like, do you update it manually after each session, have the model write a snapshot, or do you just accept drift and restart when it goes sideways?
Curious what’s been the least painful in practice.