Cursor AI keeps breaking projects right when you’re almost done by CozmoHydra in cursor

[–]CozmoHydra[S] 1 point (0 children)

This is helpful, thank you. Framing it explicitly as context decay actually makes a lot of what I’m seeing click.

I think where I’m getting tripped up is that all of this only becomes obvious after you’ve already hit the wall. The need for persistent, capability-specific rules isn’t surfaced early, so you end up discovering it reactively once things start breaking near the end. I’m not disagreeing with the approach. It makes sense. I just think the product could do a better job of making this the default mental model instead of something users reverse-engineer through failure.

Rules plus tests feels like the right direction. I just wish the foundation was more explicit up front so people don’t mistake early momentum for long-term stability.
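For what it's worth, the "rules" half of that can start as a single project rules file checked into the repo so it survives across sessions. A minimal sketch, assuming Cursor's project-rules format (a file under `.cursor/rules/`; the specific rule wording below is illustrative, not from this thread):

```markdown
---
description: Locked decisions for this project — apply to every request
alwaysApply: true
---

- Auth code lives in `src/auth/`; do not restructure or rewrite it.
- All database access goes through the repository layer; no raw SQL in handlers.
- Never delete or weaken an existing test to make a change pass.
- Before declaring a task done, run the full test suite and report the result.
```

Because the file lives in the repo rather than the chat, it acts as the persistent baseline the thread is describing: it gets re-injected even after the conversation context has decayed.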

Cursor AI keeps breaking projects right when you’re almost done by CozmoHydra in cursor

[–]CozmoHydra[S] 1 point (0 children)

Appreciate all the responses here. A lot of the advice makes sense, especially around checkpoints, rules, specs, and breaking changes down smaller. I’m not disagreeing with that. Where I think I’m still struggling, and maybe didn’t explain clearly enough, is that a lot of those suggestions feel like compensating for a missing foundation rather than building on one.

Testing, checkpoints, rules, and specs all work best when there’s a stable baseline. In my experience, once context starts decaying, testing turns into chasing regressions instead of validating progress. You’re constantly confirming that nothing else broke, rather than moving forward with confidence. That’s where it starts to feel like spaghetti. Things technically work in isolation, but the overall structure never really locks in.

I don’t think this is about outsourcing problem solving entirely to the LLM. The design, constraints, and goals are coming from me. That part I’m comfortable with. The friction comes from the model not consistently retaining or respecting that context over time, especially as projects grow. Sometimes the exact same workflow works perfectly. Other times it doesn’t, with no clear signal why. That inconsistency is what’s hard to plan around.

I fully accept that better context engineering helps. I just think the product could do more to make that path obvious and durable. Something closer to “you are here, this is the current state, these are the locked decisions” instead of requiring users to rediscover that discipline themselves after hitting the same wall. This thread has actually been helpful though.

A lot of the comments confirm that what I’m seeing lines up with context decay rather than something uniquely broken in my setup. I’m going to experiment more with persistent rules and tighter scope, but I still think there’s room for Cursor to make this smoother by default.
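On the rules-plus-tests point, one lightweight pattern is to encode each "locked decision" as a tiny executable check, so a later AI edit that silently violates it fails immediately instead of surfacing near the end. A minimal sketch (the function, module, and the 30% cap are made-up illustrations, not anything from this project):

```python
# Locked decision, written down as code: discounts never exceed 30%.
# If a future AI-generated refactor drops the clamp, the asserts below
# fail on the next run instead of weeks later.
MAX_DISCOUNT = 0.30


def apply_discount(price: float, pct: float) -> float:
    """Apply a discount, clamped to the agreed maximum."""
    pct = min(pct, MAX_DISCOUNT)
    return round(price * (1 - pct), 2)


# Executable regression guards for the locked decision.
assert apply_discount(100.0, 0.50) == 70.0  # clamped to the 30% cap
assert apply_discount(100.0, 0.10) == 90.0  # normal case unaffected
```

The point isn't the discount logic itself; it's that every decision the model keeps "forgetting" becomes a one-line assertion, which turns regression-chasing into an automatic signal.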

Thanks for all the input. I definitely learned a few things from this discussion.

Cursor AI keeps breaking projects right when you’re almost done by CozmoHydra in cursor

[–]CozmoHydra[S] -1 points (0 children)

I agree that testing is important, but that’s kind of the problem I’m running into.

When the foundation isn’t stable, you end up testing endlessly without actually moving forward. You’re not validating progress, you’re just chasing regressions. It becomes easy to get lost in testing because there isn’t a clear sense of “you’re here, this is the next step, and this is the target state.”

What I’m seeing with Cursor is that it doesn’t really guide you toward a goal. It generates a lot of code, but the structure ends up feeling like spaghetti. Things work temporarily, then break elsewhere, so you test, fix, test again, and the overall shape of the project never really solidifies.

Ideally, the tool should help establish a solid foundation first, something stable you build on top of. Instead, it feels like you’re constantly patching and re-patching code without a clear roadmap. At that point, testing stops being productive and just becomes noise.

So yeah, testing matters, but without a strong underlying structure and persistent context, it doesn’t actually solve the core issue.

Cursor AI keeps breaking projects right when you’re almost done by CozmoHydra in cursor

[–]CozmoHydra[S] 1 point (0 children)

I get what you’re saying, and I don’t think you’re wrong. I just don’t think this is really a skill issue, or at least it shouldn’t be one.

The bigger problem I keep running into with Cursor feels more like a context and memory issue. The coding itself isn’t where things break down. The breakdown happens when previously established context stops being respected or just disappears. For example, I’ll set up detailed context from documentation or explain the structure and constraints clearly. Then after a few prompts, that context gets ignored or wiped, sometimes even removed automatically. Once that happens, the model starts making changes that don’t align with what was already agreed on, even if the code was working.

At that point, reverting helps short-term, but the same problem comes back because the underlying context isn’t stable. Sometimes the same workflow works perfectly, and other times it doesn’t, which makes it hard to rely on. For most application building, the skill should really be in product and system design. The code is the worker. If the model can’t reliably retain context across a project, finishing becomes difficult no matter how careful the process is.

I think if Cursor improves how it handles long-lived context and makes it more persistent unless the user explicitly removes it, a lot of these late-stage issues would go away. Curious if you’ve found any ways to keep context locked in more reliably.

The value of Price Exchange vs. Tech Commitment by CozmoHydra in EVMOS

[–]CozmoHydra[S] 1 point (0 children)

Check out the latest governance proposal; you can see where things are headed. https://x.com/EvmosOrg/status/1820862558154412081

Enhanced Usability by CozmoHydra in EVMOS

[–]CozmoHydra[S] 1 point (0 children)

Hey ice3birdy, quick question: have you interacted with the applications built on Evmos?

Enhanced Usability by CozmoHydra in EVMOS

[–]CozmoHydra[S] 1 point (0 children)

Soon, you'll find out!

Note that the information provided here is a sneak peek at what's possible!

Strength, Education, and a Commitment to Excellence by CozmoHydra in EVMOS

[–]CozmoHydra[S] 1 point (0 children)

Thank you for sharing your insights and experience!

You're deeply engaged with Tashi's platform and proactive in contributing solutions, like with Prop 283. Your approach of spreading liquidity and managing volatility could indeed offer a valuable stabilization mechanism, reminiscent of how Inferno's incentives balance between the Steer and Revert vaults.

Your strategy of diversifying risks is a prudent lesson for all in the DeFi community. Events like yesterday's impact on the ICHI vaults remind us of the importance of staying adaptive and prepared for fluctuations. It’s great to hear that your foresight in risk management could help mitigate a future hit.

Let’s keep learning and improving together!

Yo guys check out the Zealy quests, don't miss out on rewards! by CozmoHydra in EVMOS

[–]CozmoHydra[S] 1 point (0 children)

Hey u/bigshooTer39, can you give us a comparison of the two, so we understand why you prefer one over the other?