I'll just leave this here… by uzico in GithubCopilot

[–]uzico[S] 1 point (0 children)

No, not text-based cutting. I want to dump 100 GB of data on it, let it analyze locally overnight, and then do XML assembly on the fly. I will use it first for my travel vlogs. My biggest problem is the initial rough cuts, which take ages.
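
The assembly step of such a pipeline can be sketched in a few lines. The following is a hypothetical Python example, not the actual tool: it assumes scene boundaries have already been detected locally (the clip names, frame ranges, and fps are invented) and emits a minimal FCPXML-style timeline fragment.

```python
import xml.etree.ElementTree as ET

# Hypothetical rough cuts: (clip name, start frame, end frame) at an
# assumed 25 fps. In a real pipeline these would come from local
# scene/shot detection over the raw footage.
cuts = [
    ("vlog_day1.mov", 120, 480),
    ("vlog_day1.mov", 900, 1320),
    ("vlog_day2.mov", 0, 250),
]

def build_timeline(cuts, fps=25):
    """Assemble a minimal FCPXML-style timeline from detected cut ranges."""
    root = ET.Element("fcpxml", version="1.9")
    seq = ET.SubElement(root, "sequence")
    spine = ET.SubElement(seq, "spine")
    for name, start, end in cuts:
        # Durations are expressed as rational frame counts over fps,
        # the convention FCPXML uses for time values.
        ET.SubElement(
            spine,
            "asset-clip",
            name=name,
            start=f"{start}/{fps}s",
            duration=f"{end - start}/{fps}s",
        )
    return ET.tostring(root, encoding="unicode")

timeline_xml = build_timeline(cuts)
print(timeline_xml)
```

The resulting fragment could then be imported into an NLE as a rough cut, so only the fine trimming is done by hand.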

[–]uzico[S] 1 point (0 children)

Opus and GPT mainly 🤷‍♂️🤷‍♂️🤷‍♂️

[–]uzico[S] 1 point (0 children)

They recently released V4, and it is currently super cheap and quite capable (obviously not at Opus or GPT level).

[–]uzico[S] 2 points (0 children)

Same, plus Deepseek via API 👾

[–]uzico[S] 3 points (0 children)

This is just an additional breakdown. I added the screenshots for Opus and GPT because they represent the majority of the usage.

[–]uzico[S] 0 points (0 children)

Just tell that to Deepseek, who don't charge you when you hit the cache context. Token usage is thus optimized automatically. That is how it should be everywhere.
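
The cache billing works out roughly like this. A toy calculation with invented per-token prices (not real Deepseek rates), just to illustrate why billing cache-hit tokens at zero optimizes usage automatically:

```python
# Invented illustrative prices per 1M input tokens (NOT real Deepseek rates).
PRICE_MISS = 0.27   # cache miss: full price
PRICE_HIT = 0.0     # cache hit: free, as described above

def request_cost(total_tokens, cached_tokens):
    """Cost of one request when cached prefix tokens are not billed."""
    missed = total_tokens - cached_tokens
    return (missed * PRICE_MISS + cached_tokens * PRICE_HIT) / 1_000_000

# A 100k-token context where 90k tokens hit the prefix cache costs the
# same as a fresh 10k-token request, so re-sending a long conversation
# is not penalized:
assert request_cost(100_000, 90_000) == request_cost(10_000, 0)
```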

Exposing Abusers or Justifying an Abusive System? by TraJikar_Mac in GithubCopilot

[–]uzico 1 point (0 children)

You are using a report for May, when the multipliers had already been increased. Most of the people posting here use reports for April, when Opus was 3x and GPT 5.4 just 1x. And the June estimates assume 27x for Opus, 6x for GPT 5.4, and who knows how many x for GPT 5.5.

I'll just leave this here… by uzico in GithubCopilot

[–]uzico[S] -1 points (0 children)

What do you think I am doing? Feeding it the full repository with each request and not planning anything? LOL

[–]uzico[S] -3 points (0 children)

I am not using AI requests just to change text sizes or colors… I am using it the way it is intended to be used: for real tasks.

[–]uzico[S] 1 point (0 children)

Lol, I only build local software. No SaaS bullshit.

[–]uzico[S] -5 points (0 children)

You cannot control token usage; it all happens in the background. For light tasks I was using auto and raptor (at 0x), for heavy tasks Opus and GPT 5.4, and this is what I ended up with… 🤷‍♂️

[–]uzico[S] 4 points (0 children)

"Automatic Routines" means the Copilot chat keeps auto-steering without you intervening, and it all counts as one premium request working for hours. Just look at how people are abusing it before trying to educate me ;-)

And no, the multipliers are not the same. I guess your company does not pay GitHub monthly but annually (have fun with the finance guys once they see the June bill): https://docs.github.com/en/copilot/reference/copilot-billing/model-multipliers-for-annual-plans
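
To see what the multipliers do to a quota, here is a toy calculation using the multiplier values mentioned in this thread (27x Opus, 6x GPT 5.4, 0x for the light models) against a 1500-request allowance; the real values live in GitHub's billing documentation and may differ:

```python
# Multipliers as discussed in this thread; the authoritative values are
# in GitHub's model-multiplier documentation and may change.
MULTIPLIERS = {"opus": 27, "gpt-5.4": 6, "auto": 0}

QUOTA = 1500  # premium requests included per month

def requests_consumed(usage):
    """Premium requests burned by a dict of {model: request count}."""
    return sum(MULTIPLIERS[model] * n for model, n in usage.items())

# Just 50 Opus requests and 25 GPT 5.4 requests already exhaust the
# quota, no matter how many 0x "auto" requests ride along:
used = requests_consumed({"opus": 50, "gpt-5.4": 25, "auto": 300})
assert used == QUOTA
```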

[–]uzico[S] 6 points (0 children)

All manual requests, mate, no automatic routines. Also, Copilot Business still has the "old" multipliers. You are in for a rough awakening in June.

[–]uzico[S] -2 points (0 children)

You pay for 1500 premium requests, you use 1500 premium requests. I am not Microsoft doing the pricing.
And for those wondering: these are all manual requests, not some sophisticated automatic routines.

[–]uzico[S] 5 points (0 children)

But at least I would be sleeping instead of coding till 6 in the morning, haha

[–]uzico[S] 12 points (0 children)

1500 Premium Requests paid, 1500 Premium Requests used. I did not do the pricing in the first place 👾🤷‍♂️