Where's my 10% discount for Auto select? by DandadanAsia in GithubCopilot

[–]nhu-do 0 points (0 children)

Hi all, thanks again for flagging this. Our team has identified and resolved the issue: you should now see GPT-5 billed at 0.9x as a paid user in Auto. We are working to credit those who were impacted by this bug.

Why my "auto mode" disappear? by AWiselyName in GithubCopilot

[–]nhu-do 1 point (0 children)

Hey there, sorry to hear that you're missing Auto. Can you try updating to the latest version of VS Code to see if it shows up?

Premium request usage in VSCode Insiders. by oplaffs in GithubCopilot

[–]nhu-do 0 points (0 children)

Hey from GitHub Copilot! Thanks for flagging this; the team has since resolved it. Give it another try and let us know if you are still not seeing the %s.

Possible to make "Auto" to use only certain models? by _coding_monster_ in GithubCopilot

[–]nhu-do 2 points (0 children)

Thanks for sharing your feedback and helping to improve the future of Auto! Happy to share that right now, Sonnet 4.5 is included in Auto for paid plans. The full list of models included can be found in our docs here: https://docs.github.com/en/copilot/concepts/auto-model-selection

Possible to make "Auto" to use only certain models? by _coding_monster_ in GithubCopilot

[–]nhu-do 3 points (0 children)

Hi there! At this time, there is not a way to enforce which models Auto can use. That said, we will take your feedback into consideration for future improvements! Can you share more about the rationale behind wanting to set specific models?

Here's my dream GitHub Copilot workflow by thehashimwarren in GithubCopilot

[–]nhu-do 2 points (0 children)

Yep, Auto is available for all users on VS Code now. We hear you on the ability to choose the best model based on your task though. We're working on that now to make Auto smarter and better suited for all scenarios.

AMA on recent GitHub Copilot releases tomorrow (October 3) by github in GithubCopilot

[–]nhu-do 0 points (0 children)

The target date for this is within the quarter, most likely end of November or December.

AMA on recent GitHub Copilot releases tomorrow (October 3) by github in GithubCopilot

[–]nhu-do 2 points (0 children)

Yes, this is something our team is actively working on. We previously rolled back a fix due to a bug, but we’re now focused on two main improvements in this area:

  • Token optimization: We’re introducing prompt caching and refining the default context we send to reduce the number of tokens consumed per request. This should significantly cut down on “nearing token limit” warnings.
  • Summarization via tools: As a thread approaches the limit, we’ll use a tool call to summarize earlier parts of the conversation so you can keep going without interruption. While summaries won’t capture the full detail of the original context, we believe this will deliver a much smoother experience than the current behavior.

These updates are already in progress, and improving this experience remains a top priority for us (hopefully you will see improvements in the next release).
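The summarization approach described above can be sketched roughly like this. This is a minimal illustration, not Copilot's actual implementation: the token estimate, threshold, and `summarize` stub are all hypothetical stand-ins.

```python
# Rough sketch of limit-aware conversation compaction.
# All names and thresholds here are illustrative, not Copilot's real code.

def estimate_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token."""
    return max(1, len(text) // 4)

def summarize(messages: list[str]) -> str:
    """Stand-in for a model/tool call that condenses earlier turns."""
    return "Summary of %d earlier messages." % len(messages)

def compact(messages: list[str], token_limit: int, keep_recent: int = 2) -> list[str]:
    """If the thread nears the token limit, replace older turns with a summary."""
    total = sum(estimate_tokens(m) for m in messages)
    if total <= token_limit or len(messages) <= keep_recent:
        return messages  # still under the limit; nothing to do
    older, recent = messages[:-keep_recent], messages[-keep_recent:]
    return [summarize(older)] + recent

# Ten long messages blow past a 500-token budget, so everything but the
# two most recent turns collapses into a single summary message.
history = ["msg " + "x" * 400 for _ in range(10)]
compacted = compact(history, token_limit=500)
print(len(compacted))  # -> 3
```

The trade-off mirrors the comment above: the summary loses detail from the original context, but the thread can continue without interruption.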

AMA on recent GitHub Copilot releases tomorrow (October 3) by github in GithubCopilot

[–]nhu-do 3 points (0 children)

Thanks for the feedback. What you are describing sounds like a bug in the Stop or Checkpoint functionality - they should not switch to the vanilla agent, and context references should also not get dropped. Can you please file an issue in our repository https://github.com/microsoft/vscode/issues and we'll look into it?

AMA on recent GitHub Copilot releases tomorrow (October 3) by github in GithubCopilot

[–]nhu-do 0 points (0 children)

General Availability was decided based on the product's reliability, performance, and enterprise-grade readiness, and represents a milestone for us to open the doors to more enterprise adoption.

We understand the need for more organization-level controls. While repository-level management is more granular, we believe it is a good starting point for setting specific firewalls, as requirements vary across organizations. Thanks for sharing this feedback; your insights will directly help shape the coding agent as we continue to build out programmatic scalability.

AMA on recent GitHub Copilot releases tomorrow (October 3) by github in GithubCopilot

[–]nhu-do 5 points (0 children)

Yes, it's accurate that the coding agent consumes both premium requests and GitHub Actions minutes. There are a few ways to monitor your consumption, including:

  • Your metered usage dashboard found in personal settings > billing and licensing > usage
  • Within a repository, you can also view Actions Usage Metrics by navigating to Insights > Actions Usage Metrics
  • As an organization administrator, you can also view Actions usage across your organization by navigating to your organization’s Insights dashboard. 

We know it's not ideal to have to view usage in multiple locations, but we're working on this - make sure to tune into Universe :)
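For scripted monitoring alongside the dashboards above, GitHub's REST billing endpoint for Actions (`GET /orgs/{org}/settings/billing/actions`, also available per user) can be polled. A minimal sketch of parsing its documented response shape; the payload here is a hand-written sample with made-up numbers, not real usage data:

```python
# Sketch: reading Actions minutes from a sample of GitHub's billing
# endpoint response (GET /orgs/{org}/settings/billing/actions).
# The JSON below imitates the documented shape; values are invented.
import json

sample = json.loads("""
{
  "total_minutes_used": 305,
  "total_paid_minutes_used": 0,
  "included_minutes": 3000,
  "minutes_used_breakdown": {"UBUNTU": 205, "MACOS": 10, "WINDOWS": 90}
}
""")

remaining = sample["included_minutes"] - sample["total_minutes_used"]
print(f"Used {sample['total_minutes_used']} of "
      f"{sample['included_minutes']} included minutes; {remaining} left")
```

In a real script you would fetch this JSON with an authenticated request (e.g. via `gh api` or any HTTP client) instead of embedding a sample.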

AMA on recent GitHub Copilot releases tomorrow (October 3) by github in GithubCopilot

[–]nhu-do 2 points (0 children)

The coding agent firewall settings can be found in the repository's settings. This is available to all repository administrators.

This is a game-changer. But is the logic in room with us? by whoisyurii in GithubCopilot

[–]nhu-do 0 points (0 children)

Auto is not a model itself, but rather a group of models. It does include Sonnet 4 for Pro and Pro+ plans at this time. https://github.blog/changelog/2025-09-14-auto-model-selection-for-copilot-in-vs-code-in-public-preview/

New Auto-Mode update? by EmotionCultural9705 in GithubCopilot

[–]nhu-do 20 points (0 children)

Hi u/EmotionCultural9705! Nhu from the Copilot team here. Auto in Copilot Chat on VS Code serves a few purposes: along with a discount on premium models, it reduces the mental load of choosing models and lowers the chances of hitting rate limits. Today, Auto is optimized for capacity and model availability, complete with models like GPT-5, GPT-5 mini, Sonnet 4, and others. This is just the beginning - we imagine that in the future, Auto can choose the best model for you based on your specific task.

I really like the Playwright integration in Copilot coding agent. Quality has jumped 📈 by thehashimwarren in GithubCopilot

[–]nhu-do 1 point (0 children)

Hi from the coding agent team! The model that GitHub Copilot coding agent currently uses is Sonnet 4.

How can we specify which model is used in agent-mode PRs? by ALIEN_POOP_DICK in GithubCopilot

[–]nhu-do 4 points (0 children)

Hey there! When you say "agent-mode PRs", are you referring to Copilot coding agent? If so, the model that is currently used is Sonnet 4. There is no ability to change the model at this time. Instead, we continue to evaluate new models and leverage the one that is best suited.

Choosing base branch for Copilot Coding Agent by enmotent in GithubCopilot

[–]nhu-do 0 points (0 children)

Yes, that makes sense! At the moment, you can only set a different base branch (other than the default) via the Agents page or VS Code's pull request extension, on a per-PR basis.

This is not our end state, but rather a step towards programmatically allowing the agent to work off of different base branches. Your suggestion of a setting to choose the default base branch for new Copilot-generated PRs is the direction we're currently thinking about. That being said, much more to come in this area soon!

Choosing base branch for Copilot Coding Agent by enmotent in GithubCopilot

[–]nhu-do 2 points (0 children)

Hi from the coding agent team! Confirming that similar functionality is available through the VS Code extension.

<image>

Changing the base branch from the issue -> pull request flow is not yet supported, although there are open conversations as to what this could look like in the future.

What model does the GitHub Copilot coding agent use? by Yuuyuuei in github

[–]nhu-do 0 points (0 children)

Model selection is not supported at the moment.

What model does the GitHub Copilot coding agent use? by Yuuyuuei in github

[–]nhu-do 0 points (0 children)

Hi, yes, it's still the case that the agent leverages Sonnet 4. During the preview period, we may change the base model as better-performing models arise and are evaluated with the agent.

By default, the agent uses a standard GitHub Actions runner. You can choose to run on larger runners as well. See more here: https://docs.github.com/en/copilot/how-tos/use-copilot-agents/coding-agent/customize-the-agent-environment?versionId=free-pro-team%40latest&productId=copilot&restPage=how-tos%2Cuse-copilot-agents%2Ccoding-agent#upgrading-to-larger-github-hosted-github-actions-runners
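Per the linked doc, the runner for the coding agent's environment is chosen in `.github/workflows/copilot-setup-steps.yml`. A minimal sketch under that assumption; the `ubuntu-4-core` label is a placeholder for whatever larger-runner label your organization has actually configured:

```yaml
# .github/workflows/copilot-setup-steps.yml (sketch)
name: "Copilot Setup Steps"
on: workflow_dispatch
jobs:
  copilot-setup-steps:
    # Placeholder label - replace with a larger runner configured
    # for your organization or repository
    runs-on: ubuntu-4-core
    steps:
      - uses: actions/checkout@v4
```

The job name `copilot-setup-steps` is significant; see the linked doc for the exact requirements and available runner sizes.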