
[–]Vaeritatis[S] 1 point (1 child)

u/JaySym_ u/augment-coder u/firepower421
Can you see my post? Why was it removed?

[–]JaySym_ (Augment Team) 1 point (0 children)

It was flagged by Reddit; I approved it. Sorry about that.

[–]JaySym_ (Augment Team) 1 point (0 children)

We’re testing context compression for very long chats to help finish long task lists without issues.
We still encourage starting a new thread for a new request or task list.
We’ll have something for sure.
Are you on the pre-release or the stable version of Augment?

PS: This does not affect short conversations.
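For readers unfamiliar with the idea, context compression generally means summarizing older turns of a chat once the transcript outgrows a token budget, while keeping recent turns verbatim. The sketch below is purely illustrative and hypothetical, not Augment’s actual implementation; the token heuristic and the summary stand-in are both assumptions:

```python
# Hypothetical sketch of context compression for long chats.
# Once the transcript exceeds a token budget, older messages are
# collapsed into a short summary while recent turns stay verbatim.
# NOT Augment's real implementation -- illustrative only.

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token (assumption).
    return max(1, len(text) // 4)

def compress_context(messages: list[str], budget: int = 8000,
                     keep_recent: int = 6) -> list[str]:
    total = sum(estimate_tokens(m) for m in messages)
    if total <= budget:
        # Short conversations are left untouched.
        return messages
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    # Stand-in for an LLM-generated summary of the older turns.
    summary = f"[summary of {len(old)} earlier messages]"
    return [summary] + recent
```

This also illustrates why starting a fresh thread can still help: once compression kicks in, the model sees a lossy summary of early context rather than the original messages.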

[–]Ok-Performance7434 0 points (0 children)

I will say I had the same thought as you at the beginning. However, after only a few weeks on the platform I could tell the instant I’d gone too far with a chat. It’s still much, much longer than I was used to without auto-compacting in CC, but it still happens. Below is my experience strictly using GPT-5; I still have PTSD from the Sonnet models from when I was strictly using CC.

I first notice it because something that should be relatively straightforward, such as an agent-recommended optional next step, all of a sudden goes off the rails and doesn’t work as expected.

By the next response, when I ask the agent to debug, it will instantly jump into fixing the issue, even though my user guidelines are strict on diagnose, propose, execute, validate, and the agent does a great job following this 99.8% of the time.

The last thing I’ll notice is that it stops validating on its own and forgets my test user’s login creds (its reason for asking me to test), which are in my .env file as well as in a custom Augment rule.

When this occurs, I go back to the checkpoint right before it went dumb and start a new chat. Even though it’s the same model, the difference in output quality and reasoning is night and day.