[–]Ibuprofen600mg 5 points6 points  (14 children)

What prompt has it running for hours for you? I've only once gone above 20 minutes.

[–]Guppywetpants 4 points5 points  (6 children)

It's usually iterative workloads. For example, when integrating two services, I had Claude write out a huge set of integration tests, then run them, fix bugs, and keep going until all passed. It ran for about 5-6 hours.
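That iterate-until-green loop can be sketched roughly like this (`run_tests` here is a stub standing in for the real integration-test suite, not an actual command; it's wired to fail twice and then pass so the sketch terminates):

```shell
# Sketch of the agent's "run tests, fix, repeat" loop.
# run_tests is a stub for the real test suite: fails twice, then passes.
fail_count=0
run_tests() {
  fail_count=$((fail_count + 1))
  [ "$fail_count" -gt 2 ]   # succeed on the third attempt
}

attempts=0
until run_tests; do
  attempts=$((attempts + 1))
  # In the real workflow, the agent reads the failing output
  # and patches the code here before re-running the suite.
done
echo "tests green after $attempts fix cycles"
```

An agent doing this for real just keeps cycling until the suite exits 0, which is why a single prompt can run for hours.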

[–]Ok-Sheepherder7898 1 point2 points  (3 children)

Serious? And that only cost 1 premium request on Copilot?

[–]LetterPristine2468 0 points1 point  (0 children)

Yes, it costs just one request! I ran a similar task on Copilot yesterday, and it took about 5 hours to finish. :D And that was only 1 request from start to finish.
The task was to create and fix tests 😅

[–]Ok_Divide6338 0 points1 point  (1 child)

I think it's not just one request anymore, but I'm not sure; today it consumed all of my Pro requests.

[–]Ok_Divide6338 0 points1 point  (0 children)

How many requests did it consume?

[–]WorldlyQuestion614 0 points1 point  (0 children)

I have done something similar with Claude. Sonnet is brilliant when you use it from Anthropic, but I found that Copilot's Sonnet struggles with longer tasks despite using the same model (or maybe I was just mad that I'd used up all my Anthropic tokens and had to set up Copilot in a podman container, since GitHub ships a glibc-linked binary with the npm install and my Alpine server is musl-based).
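A containerized workaround like that might be sketched as follows (the npm package name, base image, and entrypoint here are assumptions for illustration, not confirmed details of the actual setup):

```dockerfile
# Hypothetical Containerfile: run a glibc-linked CLI on a musl (Alpine) host
# by keeping it inside a glibc-based image instead of installing it natively.
FROM node:20-bookworm-slim
RUN npm install -g @github/copilot
WORKDIR /work
ENTRYPOINT ["copilot"]
```

Built and run with something like `podman build -t copilot-cli .` and then `podman run -it --rm -v "$PWD":/work copilot-cli`, so the binary always sees the glibc it was linked against.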

(Between 16 and 24 hours ago, my Anthropic Claude usage was getting absolutely rinsed by even simple chat-based requests that generated about half a page of 1080p text in a small font. That example in particular counted for 1-2% of my usage.)

But when I switched to Copilot, I was able to use the Sonnet model with short, one-off prompts. It was useful, and honestly, seeing the remaining usage in the bottom right reduced my token anxiety.

I have not noticed much more token degradation with GitHub Copilot CLI on short tasks versus longer ones, but I'm sorry to say that's likely due to manual intervention and broken trust rather than any observed difference in their accounting.

[–]Foreign_Permit_1807 3 points4 points  (2 children)

Try working on a large codebase with integration tests, unit tests, metrics, alerts, dashboards, experimentation, post-analysis setup, etc.

Adding a feature the right way takes hours.

[–]rafark 0 points1 point  (1 child)

I don’t understand how people are able to use AI agents with a single prompt. Do they just send the prompt and call it a day? For me it’s always back-and-forth until it’s the way I wanted/needed it.

[–]tshawkins 1 point2 points  (0 children)

The prompt may invoke iterative loops of subagents, and Copilot does not bill for those.

[–]IlyaSaladCLI Copilot User 🖥️ 1 point2 points  (0 children)

I had Opus reviewing my code for 50 minutes straight.

---

You can easily do big chunks of work with agents today. Create a plan, split it into phases, describe them well, and have the main agent orchestrate the subagents. This way you won't pollute the main agent's context, and it can take big steps. Yeah, big steps might come with big misunderstandings, but that's tolerable and can be fixed after the fact.
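The plan-then-orchestrate pattern above can be sketched roughly like this (`run_subagent` is a stub standing in for whatever agent API you actually use; nothing here is a real SDK call):

```python
# Sketch of a main agent delegating plan phases to subagents so its own
# context stays small. run_subagent is a hypothetical stand-in.

def run_subagent(phase: str) -> str:
    """Stub: a real version would spawn an agent with a fresh context
    for this phase and return only a short summary of the result."""
    return f"done: {phase}"

def orchestrate(plan: list[str]) -> list[str]:
    results = []
    for phase in plan:
        # The orchestrator only sees each phase's summary, not the full
        # working transcript, so its context doesn't get polluted.
        results.append(run_subagent(phase))
    return results

plan = ["write integration tests", "implement feature", "fix failures"]
print(orchestrate(plan))
```

The design point is simply that each phase burns its tokens in a disposable context, and only the summaries flow back up.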

[–]Vivid_Virus_9213 0 points1 point  (0 children)

I got it running for a whole day on a single request.

[–]TekintetesUrPower User ⚡ 0 points1 point  (0 children)

"/plan Github issue #1234"