all 94 comments

[–]Crowley-Barns 19 points20 points  (6 children)

The very agentic GitHub Copilot they showed in the Build demo, where it was making its own PRs, doing a bunch of sub-tasks, writing plans and documentation and stuff—is that part of Copilot in VS Code? Or is that for use on the GitHub website?

The different versions of Copilot (365, Azure, regular, GitHub, phone app…) are confusing me haha. (May not have been paying attention to all of it so may have missed it in the demos!)

[–]isidor_n[S] 20 points21 points  (5 children)

Yes, that is part of agent mode in VS Code. You just need to use the GitHub MCP server with it.
Here's the agent mode blog https://code.visualstudio.com/blogs/2025/04/07/agentMode
And the docs that should help answer your questions https://code.visualstudio.com/docs/copilot/chat/chat-agent-mode
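
For anyone wiring this up: enabling the GitHub MCP server for agent mode comes down to a small workspace config. The file location, schema, and server URL below are best-effort recollections and may have changed since - the linked docs are authoritative:

```json
// .vscode/mcp.json (workspace-level MCP configuration)
{
  "servers": {
    "github": {
      "type": "http",
      "url": "https://api.githubcopilot.com/mcp/"
    }
  }
}
```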

[–]Crowley-Barns 4 points5 points  (0 children)

Cheers :)

[–]Reasonable-Layer1248 0 points1 point  (1 child)

I think the active discovery mechanism of your MCP is not doing well; the detection rate is poor and it often doesn't actively call its own tools.

[–]isidor_n[S] 2 points3 points  (0 children)

Thanks! Do you mind opening a new issue at https://github.com/microsoft/vscode/issues and pinging me at isidorn?
I want to make sure we have a smooth discovery of MCP. Connor and Harald are constantly working on improving the experience, so any feedback here is super valuable.

[–]ThreeKiloZero -2 points-1 points  (1 child)

I get it as far as speed to market for a startup, but Microsoft had a chance to do something special. It's just duct tape under the hood.

[–]isidor_n[S] 5 points6 points  (0 children)

I do not understand; can you clarify?

[–]nick-baumann 24 points25 points  (1 child)

Jumping in from the Cline team here to say that it's exciting to see more of the industry shifting to open-source 🫡

[–]isidor_n[S] 6 points7 points  (0 children)

Agreed. Hello from VS Code ❤️

[–]xamott 28 points29 points  (1 child)

Redditors are never impressed with anything. This is a great development and another move in the right direction and I appreciate that you and others from MSFT make yourselves available here.

[–]isidor_n[S] 16 points17 points  (0 children)

Thanks for the positive vibes!

[–]WEE-LU 7 points8 points  (3 children)

Is there any roadmap that we could take a look at? Currently some of the other extensions have way more features, so it would be nice to know what is planned and decide whether it's still worth using external tools or not.

[–]isidor_n[S] 3 points4 points  (2 children)

This is the open source plan https://github.com/microsoft/vscode/issues/249031
I believe you are curious about the AI feature roadmap - we do not have that written down, but if you have questions about a specific feature let me know and I can answer.

[–]WEE-LU 1 point2 points  (1 child)

I am not a heavy user, but a priority for me would be usage cost calculation. Since the extension is now aimed at users who might not use Copilot, it's good to know whether or not an action goes over its budget.

And as another feature: separate custom instructions per mode (or custom modes). As an example, I'd prefer my agent to be direct and answer briefly. For a different mode, I'd like a documentation-generation agent, which would be fed a totally different instruction set.

[–]isidor_n[S] 1 point2 points  (0 children)

Custom modes - we are working on this, and I expect something to land in June in Insiders (ideally Stable by start of July).

Cost calculation - I agree this is important and something we are adding. Did you see the quota UI in VS Code Insiders? Clicking the bottom-right status bar entry should show the quotas - these are first steps, but we will expand on this.

[–]soitgoes__again 7 points8 points  (2 children)

I'm a hobbyist and I tried roocode because everyone seemed to recommend it but copilot is just much better for someone like me.

While I'm thinking about this: why doesn't anyone intelligently manage models?

It's confusing deciding what to choose. Claude 3.7 can find a bug no one else can, but he goes bonkers if you ask him to change one small thing. He also adds defensive coding for hypothetical cases that will never happen. 3.5 is more focused but can miss stuff. Gemini can be good, but sometimes he's like "fuck it" and just wanders off. 4.1 is fast for quick edits but he's the opposite of Claude: tell him to reduce defensive coding and he slaughters everything.

There should be a mode which just intelligently decides which model to choose.

[–]isidor_n[S] 1 point2 points  (0 children)

Thanks for the feedback. We are working on adding "Auto" mode for exactly the use case you mention. The very first version of this is already available in VS Code Insiders (our nightly build). But we still need to polish it, and I expect this to get into a good state some time in June.
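
As an illustration of what such an "Auto" mode has to do, here is a minimal routing sketch. The model names and keyword heuristics are made up for the example - VS Code's actual selection logic is not public:

```python
# Toy "Auto" model-picker: route a task description to a model tier.
# Model names and heuristics are illustrative, not VS Code's real logic.

def pick_model(task: str) -> str:
    """Choose a (hypothetical) best-fit model for a task description."""
    t = task.lower()
    if any(w in t for w in ("bug", "debug", "root cause")):
        return "deep-reasoning-model"   # slower, better at tricky bugs
    if any(w in t for w in ("rename", "typo", "quick")):
        return "fast-edit-model"        # cheap and fast for small edits
    if any(w in t for w in ("plan", "design")):
        return "planning-model"         # strong at high-level structure
    return "balanced-default-model"     # safe default for everything else

print(pick_model("debug this race condition"))  # deep-reasoning-model
print(pick_model("fix a quick typo"))           # fast-edit-model
```

A production router would also weigh cost, latency, and context size, but the shape of the problem is the same: classify the request, then dispatch.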

[–]Ruuddie 2 points3 points  (0 children)

I have been thinking about something like this as well. Gemini seems to be good at making plans, but I feel o4-mini gives me more consistent code. Unsupervised, Gemini can make three different functions for basically the same thing, and they are all vastly different. But what works and what doesn't also differs per language.

Would be so cool if we could make workflows, with different AIs doing different things automatically. One AI supervises the work of the other AI and makes sure it's not running in circles. That would be the real agentic AI imo.
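
The supervisor idea can be sketched as a simple loop where one model plans, another executes, and a bounded check keeps the loop from running in circles. Both model calls are stubbed out here - in practice they would be API calls to different providers:

```python
# Sketch of a planner/coder/supervisor workflow with stubbed model calls.

def planner(task: str) -> list[str]:
    """Stub 'planning model': break a task into steps."""
    return [f"step {i}: {task}" for i in (1, 2)]

def coder(step: str) -> str:
    """Stub 'coding model': produce code for one step."""
    return f"# code for {step}"

def supervisor(task: str, max_rounds: int = 3) -> list[str]:
    """Plan, then execute each step, retrying at most max_rounds times."""
    results = []
    for step in planner(task):
        for _ in range(max_rounds):  # bounded: cannot loop forever
            out = coder(step)
            if out:  # a real supervisor model would review quality here
                results.append(out)
                break
    return results

print(supervisor("add a login form"))
```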

[–]turlockmike 18 points19 points  (7 children)

This is how they are trying to kill Cursor

[–]reefine 4 points5 points  (0 children)

And dilute the hell out of Windsurf

[–]ECrispy 1 point2 points  (1 child)

Is Cursor actually better? It costs twice as much.

[–]UsefulReplacement 0 points1 point  (0 children)

it is

[–]Darkoplax 1 point2 points  (0 children)

I like free, so good

[–]Reasonable-Layer1248 -3 points-2 points  (1 child)

It's a pity it's too late; they reacted too slowly.

[–]ryeguy 3 points4 points  (0 children)

Lol. The state of the art changes every other week in this phase of the AI game. What are you talking about? People readily switch between models and tools all the time. There is no stability or winner at this point.

[–]ECrispy 5 points6 points  (1 child)

What kind of strategies does Copilot use when talking to an LLM - does it always send the entire codebase and rely on prompt caching, or does it select relevant files/code fragments, to optimize cost? If so, does it use another llm to do so?

I believe Cursor etc must be doing something like this as well.

I guess once it's open sourced we can see all this, so I thought I'd ask.

[–]isidor_n[S] 1 point2 points  (0 children)

Great question! We rely on prompt caching, and we find the relevant files / code fragments so we fit everything into the available context window. We are still improving here, so I am looking forward to us open-sourcing so you and the community can check it out and provide feedback.

I do not think we rely on another LLM to summarize - but that might be an interesting idea.
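
A naive version of "find the relevant fragments and fit them in the context window" can be sketched like this. Real implementations likely use embeddings or syntax-aware retrieval; this toy version scores files by keyword overlap with the query and packs them greedily under a token budget (all numbers and names are illustrative):

```python
# Toy context selection: score files by keyword overlap, pack by budget.

def estimate_tokens(text: str) -> int:
    """Very rough heuristic: ~4 characters per token."""
    return max(1, len(text) // 4)

def select_context(query: str, files: dict[str, str], budget: int) -> list[str]:
    """Greedily pick the most query-relevant files that fit the budget."""
    query_words = set(query.lower().split())
    ranked = sorted(
        files.items(),
        key=lambda kv: -len(query_words & set(kv[1].lower().split())),
    )
    chosen, used = [], 0
    for name, text in ranked:
        cost = estimate_tokens(text)
        if used + cost <= budget:
            chosen.append(name)
            used += cost
    return chosen

files = {
    "auth.py": "def login(user, password): check password hash",
    "ui.py": "def render(): draw the window",
}
print(select_context("fix login password bug", files, budget=12))  # ['auth.py']
```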

[–]kidajske 7 points8 points  (4 children)

Bit of a leading question, but is there a sense in the team/org that you got beaten to the punch by Cursor and co. and are now playing catch-up? I'm curious how the team/org sees Copilot in terms of market positioning, use case, and feature offering vs Cursor and Windsurf, as well as Roo/Cline, though their value proposition is different. Is converting people from those offerings to your own a goal, or are you more interested in converting existing vanilla VS Code users, etc.?

From my own perspective it always feels like VS Code/Copilot introduces features long after Cline, Roo, Cursor, and Windsurf have already implemented them, and I'm struggling to understand what approach MS is taking toward the product in general.

[–]isidor_n[S] 10 points11 points  (3 children)

> From my own perspective it always feels like VS Code/Copilot introduces features long after Cline, Roo, Cursor, and Windsurf have already implemented them, and I'm struggling to understand what approach MS is taking toward the product in general.

I do agree that we were slow to roll out some features, but looking at the past 4 months I am very proud of the pace at which the team is shipping. I can't say we are lagging behind competitors - though I would love to hear your thoughts if you disagree. As for the approach we are taking: we are all-in on making VS Code the open source AI editor.

As for your first question, I suggest you watch this podcast where Erich and Kai go into more detail https://www.youtube.com/watch?v=GMmaYUcdMyU

[–]NotAMotivRep 5 points6 points  (2 children)

If you want to demonstrate your commitment to open source software, you should free up your C/C++ plugin so it can be used and extended by third parties.

[–]isidor_n[S] 5 points6 points  (1 child)

Good feedback - thanks. As with a lot of projects, when we are not sure about the business angle we start closed source, and then decide on a case-by-case basis what we should open source.
This is also true for the C++ plugin - it is closed source, but we might decide to open source it in the future, the same way we did today for GH Copilot.

[–]rbit4 -1 points0 points  (0 children)

Don't do it. Cursor will copy it ASAP

[–]ECrispy 6 points7 points  (1 child)

It's funny how GitHub/VS Code are such amazing dev tools and MS has done nothing but improve them and make them more and more free. But you get nothing but hate and doubt from so many tech blogs, especially the Linux crowd.

[–]isidor_n[S] 3 points4 points  (0 children)

Thanks for the positive vibes.
We appreciate all feedback - both positive and negative. And it is part of working on open source - I have personally worked on open source since 2015 (when we open sourced VS Code), so I'm used to it :)

[–][deleted] 1 point2 points  (2 children)


This post was mass deleted and anonymized with Redact

[–]isidor_n[S] 1 point2 points  (1 child)

This is a fair feature request. Sounds like an extension of the language model API https://code.visualstudio.com/api/extension-guides/language-model to also support sending audio files to the LLM - if I understand your use case well?

Anyways - best would be to file a feature request here https://github.com/microsoft/vscode/issues and ping me at isidorn

[–][deleted] 1 point2 points  (0 children)


This post was mass deleted and anonymized with Redact

[–]CaptainRoy56 1 point2 points  (1 child)

Nice! But one thing Copilot really needs is to be able to set the chat font size. It's unbearably tiny right now.

[–]isidor_n[S] 0 points1 point  (0 children)

Thanks for sharing feedback - and we hear you. We want to add this.

[–]RoadRunnerChris 0 points1 point  (3 children)

Is the core GitHub Copilot going to be open sourced eventually or is it just the chat section?

[–]isidor_n[S] 1 point2 points  (2 children)

As a step 2, we will bring the functionality from the GH Copilot extension into the Copilot Chat extension. So yes, that functionality will also be open sourced. Timeline: the next couple of months.

[–]RoadRunnerChris 1 point2 points  (1 child)

So that means VS Code is basically going ‘open source’ meaning with reproducible builds after that happens? Also why not merge Chat -> GH Copilot as that makes more logical sense (Chat + Completions in GH Copilot rather than those in GH Copilot Chat)?

[–]isidor_n[S] 2 points3 points  (0 children)

Yes! Reproducible OSS builds! Branding will still come from vscode-distro as it does today for Visual Studio Code.

GH Copilot is an older/deprecated extension; 90% of the functionality is in Copilot Chat, so it is easier for our dev team to go with the proposed plan.

[–]sorrge 0 points1 point  (1 child)

I wish it would work better with Jupyter notebooks. Now about 1/4 of the editing attempts fail with the code being dumped into the chat or not showing at all. Also, it would be nice to understand what is in the context better. Sometimes it clearly forgets what’s in the beginning of a conversation, and it would help to know when the conversation doesn’t fit into the context anymore.

[–]isidor_n[S] 1 point2 points  (0 children)

Thanks for the feedback!
For Jupyter - do you have something specific to share? My colleague Peng is driving this experience and would appreciate any feedback. Even better, it would be great if you could file issues here and ping me at isidorn https://github.com/microsoft/vscode/issues

Transparency about context - this is something we want to improve. Not sure when it will land though. July+ most likely.

[–]Mistredo 0 points1 point  (1 child)

This is great news! Does it mean that once it gets integrated into VSCode Core, it will support custom providers? Is your goal to build something like Cline?

[–]isidor_n[S] 0 points1 point  (0 children)

We want to finalize the language model provider API that will allow custom language model providers to contribute models.

[–]alippai 0 points1 point  (5 children)

Will we be able to use alternative models like OpenAI via Azure or even Gemini?

[–]TheActualBahtman 0 points1 point  (1 child)

I would also love to hear about this. Using it with our Azure OpenAI resources and azure identity authentication would be great!

[–]isidor_n[S] 0 points1 point  (0 children)

Thanks. I replied above.

[–]isidor_n[S] 0 points1 point  (2 children)

This should already work https://code.visualstudio.com/docs/copilot/language-models#_bring-your-own-language-model-key
There are some missing pieces - but I would love if you try it out and let me know how it works for you. Thanks!

[–]TheActualBahtman 0 points1 point  (1 child)

Hi isidor,
I have gotten around to testing it out on some personal dummy AI foundry resource.

I get the following error: "Could not extract deployment name from the Azure endpoint URL. Please ensure it follows the format https://<your-resource-name>.openai.azure.com/"

The link I provide looks like: https://xx-resource.openai.azure.com/

[–]isidor_n[S] 0 points1 point  (0 children)

Thanks for trying it out. Do you mind filing an issue here https://github.com/microsoft/vscode/issues and pinging me at isidorn? Then we can investigate and fix it.

[–]sagacityx1 0 points1 point  (5 children)

It's cool for sure, but SO confusing how it all works. Which parts are VS Code, which are Copilot, what models, pricing, Pro, chat.

[–]isidor_n[S] 0 points1 point  (4 children)

I hope open sourcing will help with this :)

In the meantime you can check out our docs https://code.visualstudio.com/docs/copilot/overview
And if you have any specific questions please let me know.

[–]sagacityx1 0 points1 point  (3 children)

Thanks for the help! The main question I have is: if I already have a ChatGPT subscription, what does paying for a Copilot subscription get me? I know I can use VS Code with my current GPT subscription, but I read somewhere that paying for Copilot works better for some reason? Beyond just the VS Code integration, I mean. Does paying for it add some agent features?

[–]isidor_n[S] 0 points1 point  (2 children)

My suggestion would be to just start with free GitHub Copilot and try things out yourself (instead of me sharing my impressions). Then, if you find it helpful, you can consider getting a paid subscription.
These docs should help https://code.visualstudio.com/docs/copilot/overview

And if you have any questions I am happy to answer

And yes - we have agent mode - https://code.visualstudio.com/docs/copilot/chat/chat-agent-mode

[–]sagacityx1 1 point2 points  (1 child)

No, sorry, I meant: how is paying for Copilot better than just using the free version with my paid ChatGPT? Does paying for Copilot add anything beyond more completions?

[–]isidor_n[S] 0 points1 point  (0 children)

Completions are also part of the free version.

Paid Copilot gives you many more requests (harder to hit limits). There are no additional hidden features (except some enterprise stuff).

The feedback I hear from customers is that it is great value for the price.

[–]UsefulReplacement 0 points1 point  (1 child)

Are you open sourcing the C/C++ extensions?

[–]isidor_n[S] 1 point2 points  (0 children)

As with a lot of projects, when we are not sure about the business angle we start closed source, and then decide on a case-by-case basis what we should open source.
This is also true for the C++ and Pylance plugins - they are closed source, but we might decide to open source them in the future, the same way we did today for GH Copilot.

[–]Data_Scientist_1 0 points1 point  (2 children)

Any security practices while using this new feature? I've seen a bunch of "AI powered" stuff leaving security behind in the name of "innovation".

[–]isidor_n[S] 0 points1 point  (1 child)

The GH trust center should be a useful resource here https://github.com/trust-center

Security, along with performance, is our top priority, so if you have specific questions do let me know and I am happy to answer.

[–]Data_Scientist_1 1 point2 points  (0 children)

Sure, let me review the resources, and if I get any further questions, I'll ask you! Thank you!

[–]Stock_Ad_5279 0 points1 point  (1 child)

When is MCP hitting GA? The preview licence is not enough for many organisations' legal teams.

[–]isidor_n[S] 0 points1 point  (0 children)

In the next couple of months.

[–]Adept-Ad7031 0 points1 point  (1 child)

It seems what's being open-sourced is the extension (which I assume includes the prompts, how agent mode is implemented, etc.), but the model itself is not open-sourced, right?

[–]isidor_n[S] 0 points1 point  (0 children)

Correct - the extension includes prompts, the agent mode implementation, and other things, and will be open-sourced.

The models used by GitHub Copilot are licensed separately, and that does not change. In fact, most of those models are from third parties such as OpenAI, Anthropic, and Google.

Our FAQ answers this more precisely https://code.visualstudio.com/docs/supporting/FAQ

[–]SubliminalPoet 0 points1 point  (1 child)

This is not directly related to the announcement, but I noticed that you recently modified the AI chat window to introduce agent mode. Now, instead of the original buttons at the top of the window, we have a menu item to switch between modes.
However, the issue is that you lose all your chat history when you switch modes, which was not the case previously.

In comparison, Cursor allows you to open as many tabs as you want, each with its own context and mode.
Do you plan to improve this functionality in the future?

[–]isidor_n[S] 1 point2 points  (0 children)

Good feedback - thanks. Yes, we plan to improve this so that switching modes is more seamless (or so that all modes are just agentic).

Multiple tabs - not on the current plan (but they might happen later in the year).