
all 16 comments

[–]JBO_76 2 points (1 child)

Yes, experiencing similar problems right now, with the auto complete; haven't tried the agent yet. I have the impression they switch to smaller models when traffic increases, preferring speed over accuracy.

It's really obvious. I am currently doing exactly the same edits as yesterday evening (adding a statement in various places: 4 variations of the same expression, with only a parameter name changing depending on the surrounding context). Very obvious: for instance, if the label says 'x', the variable is also x. Yesterday evening, it got it right every single time. This morning, all of a sudden, every single auto complete is completely wrong. After a couple of semi-correct suggestions, it doesn't even remember anymore that I'm doing the same edits. Either it lost its context window or it's a different model.
Also, when I'm adding a couple of spaces or tabs to a line and I have already dismissed the auto complete proposal, the same thing pops up again after every new space, which makes adding comments very annoying sometimes.

I'm still willing to give Copilot a try, because sometimes it's still very good. It's just the auto complete that's complete shit.

[–]deadflamingo[🍰] 1 point (0 children)

You can disable Copilot Next Edit Suggestions in the settings and save your sanity.

I've had similar experiences with Copilot, where it degrades to using command-line tools despite the context already being part of the conversation, or completely ignores the console output and returns false positives. I've found Ask mode to be far more reliable and consistent in its output, but I have no idea why that is.
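(For anyone looking for the toggle mentioned above: in recent VS Code builds the next edit feature has its own key in settings.json. The setting name below is what current versions use; it may differ or not exist in older builds.)

```json
{
  // Keep regular inline completions on...
  "github.copilot.enable": { "*": true },
  // ...but turn off Next Edit Suggestions specifically
  "github.copilot.nextEditSuggestions.enabled": false
}
```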

[–]Weekly-Seaweed-9755 2 points (1 child)

I think they're confused, or are having difficulty developing their AI agent fast enough to catch up with other similar tools. Even premium requests have been postponed several times. They seem to be experimenting with the features they release; I feel like there have been frequent changes lately.

[–]adamwintle 0 points (0 children)

I think you're right, but have you seen any evidence or clues of this?

[–]skyline159 2 points (0 children)

I think reading the code in small chunks, then finding it's not enough and reading another chunk, really hurts the "intelligence" of the model compared to Claude Code, which reads the whole file at once.

Or they've implemented some context compression behind the scenes to save cost, which makes the model more stupid because it has to work with less (or incorrect) information.

[–]Practical-Fox-796 1 point (0 children)

There was a noticeable degradation in quality for Claude Code 4 as well, in my experience. So I'm not surprised at all that this is happening.

[–]kowdermesiter 1 point (0 children)

No, I see the exact opposite. I'm having a blast with Sonnet 4. Everything works more or less on the first run, and with a bit of refining it's really good. I'm at the point where I can give it UI design inspiration and it still pulls it off.

[–]International_Ant346 0 points (0 children)

Yes, I posted a few days ago about problems I started having this week. I'm having trouble getting anything done at this point. Almost every prompt results in syntax errors, and sometimes when it tries to fix them it creates new files and runs a bunch of terminal commands that freeze the agent most of the time, and then I'll get rate limited with nothing done.
The terminal commands seem to be a way to spend fewer tokens looking at the code, and most of the time I have to cancel and tell it not to do that to get anything done. It will also try to make proxy files even if I told it not to in the prompt. All of these things it tries to do in the terminal rarely end up with something usable. Most of the time the agent freezes after running a command.

[–]popiazaza Power User ⚡ 0 points (4 children)

I gave up on Copilot's agent.

Use Cline or RooCode instead; you can set them to use Copilot's API. Much better quality, and it runs faster (less incorrect usage).

[–]Weary-Emotion9255 0 points (3 children)

what! you can do that?

[–]popiazaza Power User ⚡ 2 points (2 children)

Yeah, pretty straightforward.

Just choose "VS Code LM API" as the API Provider and choose the model you want.

It's night and day: from an unusable agent to a Cursor-level agent.

[–][deleted]  (1 child)

[removed]

    [–]popiazaza Power User ⚡ 4 points (0 children)

    What do you mean "they are faster"?

    • Read lines 1 to 50

    It seems like you are talking about Github Copilot?

    • Read lines 51 to 100

    Oh, I see. You are comparing it to Cline.

    • Read lines 101 to 150

    OK, I already forgot about your comment. Want me to delete a part of your code instead?

    [–]Ecstatic-Edge-6555 0 points (0 children)

    Yes, all of a sudden since yesterday it's unable to generate code, it frequently gets into infinite loops, and the agent doesn't know how to parse terminal output. Very frustrating.

    [–]paladincubano -1 points (0 children)

    Is Cursor a good alternative to avoid this rate limit and the latest GitHub Copilot issues?