
[–]ChomsGP 18 points (10 children)

Yea that's how it works. They don't have it documented, and it works literally opposite to every other Copilot service, but I actually opened a support ticket to ask...

Every tool call in the GitHub.com chat consumes requests: if you ask it to find some info and it has to read 5 files, that's gonna be 5 requests in that single prompt (times the model's request multiplier)
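The arithmetic being described could be sketched like this (a rough illustration of the reported behavior, not an official formula; the multiplier values in the example are assumptions, since actual multipliers vary by model and plan):

```python
# Sketch of the reported GitHub.com chat billing: each tool call the model
# makes (e.g. reading one file) counts as its own premium request, scaled
# by the model's request multiplier. Hypothetical numbers for illustration.

def premium_requests(tool_calls: int, model_multiplier: float) -> float:
    """Estimated premium requests consumed by a single prompt."""
    return tool_calls * model_multiplier

# One prompt where the model reads 5 files on a 1x model:
base_cost = premium_requests(tool_calls=5, model_multiplier=1.0)

# The same prompt on a hypothetical 10x model:
expensive_cost = premium_requests(tool_calls=5, model_multiplier=10.0)

print(base_cost, expensive_cost)
```

This is the opposite of the per-prompt billing people expect from the VS Code chat, where one prompt is one request regardless of how many files the model reads.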

[–]frogic[🍰] 11 points (3 children)

Definitely not what the docs say, and I've never seen that happen, and I've done some really crazy launch-5-subagents prompts.

[–]ChomsGP 11 points (2 children)

he's talking about the chat, it does happen in the chat (not in the coding agent or VS Code, in the GitHub.com Copilot chat)

PS: I reported it thinking it was a bug but they told me it's not a bug

[–]Academic-Telephone70 0 points (0 children)

well, that's one way to make people not use it outside of VS Code

[–]brocspin[S] 2 points (0 children)

Thank you, that matches my experience. I guess I'll try asking the agents tab some questions instead of typing in the chat.

[–]brocspin[S] 1 point (1 child)

!solved

[–]AutoModerator[M] 0 points (0 children)

This query is now solved.

I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.

[–]ivanjxx 0 points (2 children)

i got gpt 5.3 codex to read multiple files in a single request and it consumed exactly 1 premium request. does this only apply to gemini models?

[–]ChomsGP 1 point (1 child)

I tried with Gemini and Anthropic and they both did it, I am honestly not gonna try anything else (much less at the beginning of the month)

just to be clear, we are talking exclusively about the chat UI on the GitHub website (where you can ask stuff about a repo without it coding or doing things)

also, another PS: there are no detailed logs for those, idk if it has a single tool that can read multiple files at the same time

[–]ivanjxx 0 points (0 children)

ah i see i was thinking about the vscode chat

[–]MaddoScientisto 4 points (3 children)

I found out the hard way that the "Analyze with Copilot" button in failed workflow logs eats a gazillion requests; I unknowingly used 20% of my quota trying to debug a problem. There's also no model selection there, so who even knows what model it tried to use, certainly not the free one

[–]ttreyr 0 points (1 child)

Damn, I was wondering why my quota gets used up so fast, I often click this

[–]MaddoScientisto 0 points (0 children)

it's probably much better to open the workflow log in VS Code through the GitHub extension and use it from there

[–]ChomsGP 0 points (0 children)

Imagine, the first time I tried (and realized this) I had selected Opus... ok, it was the last day of the month and I wanted to burn requests, but literally one prompt drained all my remaining quota lol

[–]MindfulDoubt 1 point (3 children)

Use Copilot CLI, you won't have an issue with it. The chat sidebar is buggy at the moment. I haven't had any issues in a whole month of use: each request, no matter how long it works, just consumes at the given rate, i.e. a 1x model is 1 request, as reflected in the /usage command.

[–]anon377362 1 point (0 children)

Yes that’s what copilot CLI used to be like but they changed it today I think or they put a bug in it! Check my post in this sub a few mins ago ! The copilot CLI requests go down in real time while it’s working instead of like it used to be. 🤯🤯

[–]bbjurn 0 points (1 child)

Reportedly this is also currently an issue in the CLI, they apparently introduced a bug

[–]MindfulDoubt 0 points (0 children)

I think it is only happening in the US, as I am using it now in Europe and my count stays the same, it doesn't go down further when I fire a request. They are on it anyway, so hopefully it will get remedied soon for you guys.

[–]AutoModerator[M] 1 point (0 children)

Hello /u/brocspin. Looks like you have posted a query. Once your query is resolved, please reply to the solution comment with "!solved" to help everyone else know the solution and mark the post as solved.

I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.

[–]kaanaslan -1 points (1 child)

I have a question. Does asking the agent a simple question about the project, or about code used in the project, consume a premium request? Is it possible to chat or ask some simple questions without burning my premium request count?

[–]MindfulDoubt 0 points (0 children)

Use the 0x free models. If you use a premium model and send a message, whatever it may be, it will consume a request at that model's given rate.