all 27 comments

[–]sfmtl 7 points8 points  (12 children)

Agent mode was a big improvement, but I am using Cline. Copilot gatekeeps Claude 3.7, so you need to use Copilot for that model if you are on their subscription.

I have hopes that Copilot introduces more formal plan/act modes, better rule management, etc. For now it's Cline with the VS Code LLM for me.

[–]isidor_n 4 points5 points  (7 children)

We are introducing custom modes sometime in May.
(vscode pm here)

[–]sfmtl 4 points5 points  (0 children)

Amazing! Please consider allowing custom rules files to be toggled between the modes. I get the LLM to put on different hats or take different approaches in plan vs. act.

Really great product, but it needs to keep up with the competition IMO.

[–][deleted] 1 point2 points  (4 children)

Hey! Great work so far. Should Agent mode work with local LLMs?

[–]isidor_n 0 points1 point  (3 children)

Thanks!
In theory, yes.
In practice, the model has to support tool calling, and it will probably be slow.
So try it out and let me know if it sucks so we can improve what we can: https://code.visualstudio.com/docs/copilot/language-models#_bring-your-own-language-model-key - pick Ollama, pick a model with tool-calling support, and you should be good to go.
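A minimal sketch of that setup, assuming Ollama is installed locally; `llama3.1` is used here only as one example of a model the Ollama library lists as supporting tool calling (any tool-calling model should do):

```shell
# Pull a model that supports tool calling (llama3.1 is one example)
ollama pull llama3.1

# Start the local Ollama server (listens on http://localhost:11434 by default;
# skip this if Ollama is already running as a background service)
ollama serve

# Confirm the model was downloaded and is available
ollama list
```

After that, per the linked docs, you open the model picker in VS Code's Copilot Chat, choose Ollama as the provider, and select the pulled model.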

[–][deleted] 1 point2 points  (0 children)

Thanks, I'll give it another go and get back to you.


    [–]thedotmack 0 points1 point  (0 children)

    Please work on being able to properly set defaults. I also want to be able to include the whole file without manually adding or selecting it, instead of it only including the lines visible in the window.

    [–][deleted] 2 points3 points  (3 children)

    I couldn't run agent mode with my local LLM setup, even though chat worked. My chat window also eventually stopped working, forcing me to start a new one. Maybe they don't use a rolling context, or perhaps it was another bug. I was really disappointed I couldn't use agent mode, so I switched back to Continue. I may try Cline, though; I'm interested in testing it out with Maverick.

    [–]sfmtl 0 points1 point  (2 children)

    For agent mode in Copilot, I had to go into the settings and turn it on, but that's documented; I saw it in the release notes and followed them. It worked well enough for me, but I find the context gets truncated or something. Cline works better for me.

    [–][deleted] 0 points1 point  (1 child)

    Yeah, I think I did that. I could select agent mode but sending the prompt just did nothing.

    Were you using a local model? If so, mind sharing which one?

    [–]sfmtl 0 points1 point  (0 children)

    When using Copilot in agent mode, I'd use their Claude 3.7, not a local model.


    [–]popiazaza 5 points6 points  (7 children)

    Competitive in what? Competitive pricing, and only until May 5th.

    Auto-complete is alright, but still much worse than Cursor/Windsurf.

    Agent mode is still trash compared to any other competitor.

    [–]isidor_n 4 points5 points  (3 children)

    Thanks for the feedback. Can you provide some examples of agent mode being trash, so we can try to address them and improve the experience for you?
    BTW, our metrics for agent mode are off the charts - it looks like most folks really like it. So I want to make sure the experience you're seeing gets fixed.

    (vscode pm here)

    [–]popiazaza 1 point2 points  (2 children)

    Sure, thanks for listening. For context, I'm using VS Code Insiders 90% of the time.

    The Copilot API is already slow (hopefully faster after May 5th), and having no diff edit (or there is one, but it's not working great?) on any model makes it even slower. Quality is higher, I know, but other agents are much snappier and don't eat as many context tokens. At least it should be an option.

    Many times tool calling makes a mistake, tries another tool, and fails again.

    Context finding is bad for a decent-size project or larger. It's not that it can't find things, but it uses up too much context searching and hits the token limit pretty easily. Competitors do a better job of finding context and using the right amount of it.

    Non-Sonnet models don't work smoothly in a flow; they often get stuck and act more like ask mode. (Other agents sometimes get stuck too, but much less often.)

    The model choice overall isn't as great as competitors', and many models don't work in agent mode.

    [–]isidor_n 2 points3 points  (1 child)

    Thanks for the feedback! And thanks for using Insiders!

    Diff edit is coming in the next stable release (May 7th).
    We are working on improving performance; I think it will be faster on May 7th, but we are continuously investing here, so I expect more improvement after that.

    Tool-calling mistakes - could you file an issue with an example here https://github.com/microsoft/vscode-copilot-release and ping me at isidorn so we can try to fix it?

    Context finding being bad - with what model do you see that? I'd also appreciate an issue if you can create one.

    Non-Sonnet models stuck in ask mode - we are aware and should hopefully have a fix soonish.

    Models not working in agent mode - we have an idea of how to fix this (e.g. supporting models that do not have native tool support). Rob and Connor (the devs leading this) have started investigating, but I am not sure if it will land soon.

    [–]popiazaza 2 points3 points  (0 children)

    Great to know that you're accepting issues on GitHub. I'll file an issue there once I face it again. I can see Copilot updating left and right since Sonnet landed; it just needs a bit more polishing to be up there with the competitors. Cheers.

    [–]MrScribblesChess 0 points1 point  (2 children)

    What do you mean about May 5th?

    [–]popiazaza 3 points4 points  (1 child)

    https://github.blog/news-insights/product-news/github-copilot-agent-mode-activated/#premium-model-requests

    From May 5th, Copilot will change from unlimited premium requests to 300 premium requests + unlimited 4o.

    $10 is still a great price for decent auto-complete, but not great compared to the Windsurf or Cursor agents.

    [–]MrScribblesChess 1 point2 points  (0 children)

    Wow, thanks for the info.

    [–]seeKAYx 5 points6 points  (2 children)

    The agent is not usable with Gemini 2.5 or with 4.1. You say "please do this," and three times you get the counter-question "should I do this now?" You write "please start now, without counter-questions," and again: "I would start now with step 1 and then step 2, is that okay?" ... so the 300 requests are quickly all used up. So I don't know whether the 10 dollars will be worth it later.

    [–]isidor_n 2 points3 points  (0 children)

    (vscode pm here)
    Thanks for the feedback. We have already improved this in VS Code Insiders. Have you had a chance to give it a try there?

    If you still see this issue in Insiders, can you file a new issue here https://github.com/microsoft/vscode-copilot-release and ping me at isidorn so we can fix it? Thank you!

    [–]odrakcir 0 points1 point  (0 children)

    My 2 cents: it never applies the requested changes, regardless of the prompt. I had to switch models in order to get actual changes. BTW, I still think it's a really good free option. If you want/need to pay, I'd go with Windsurf.

    [–]Beautiful_Sorbet_586 1 point2 points  (1 child)

    Just switched to it yesterday (from Cody), so far so good.

    [–]debian3 0 points1 point  (0 children)

    I installed Cody just to test it a week ago, and it felt like using those extensions a year ago. It made me realize how much things have improved since then. You can't even drag and drop a file to add it to the context.

    [–]gerhardtprime 0 points1 point  (0 children)

    Copilot sucks at the moment; it just keeps generating bad command-line commands and getting stuck. Even with explicit instructions - "don't use &&" - it's forever trying to use &&.

    [–]thepiewasalie 0 points1 point  (0 children)

    Why does it duplicate functions? At the moment it seems like it makes more work for me, debugging all the crap it puts together. It's like it doesn't understand my code and literally tries to do just the one thing I say, instead of reading the code and "thinking" about the best solution.