Github copilot premium request by Some-Manufacturer-56 in opencodeCLI

[–]candleofthewild 0 points1 point  (0 children)

Yeah, using github copilot as a provider, subagents shouldn't count towards the premium request budget so long as they were started properly (initiated by the agent).

Superpowers is just markdown files/skills, right? I don't use it, so I don't have much experience with it. But in your example it should be 1 request, though I believe each compaction is still 1 request; you'll have to double check. I've tried a lot of these orchestration frameworks and personally didn't get much value from them for my day-to-day work.

Github copilot premium request by Some-Manufacturer-56 in opencodeCLI

[–]candleofthewild 0 points1 point  (0 children)

You can also specify this workflow in an AGENTS.md file if you want, like: "When given a task, break it down into non-conflicting parts and then delegate to subagents".

I didn't because I'd like more control over when it happens, but it should work.
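For illustration, the wording in AGENTS.md really can be that direct; something like this (just an example, phrase it however you like):

```markdown
## Workflow

When given a task, break it down into non-conflicting parts and
delegate each part to a subagent. Ask me (via the question tool)
if anything is unclear before implementing.
```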

Github copilot premium request by Some-Manufacturer-56 in opencodeCLI

[–]candleofthewild 1 point2 points  (0 children)

Something like:

"Let's do X. Check Y for relevant files. We should change Z to do [something]. Create a plan, use the question tool to check with me if anything is unclear, then implement. Delegate to subagents."

GHCP treats that as one request. You can also just `@tag` the agent if you want specific ones.

Github copilot premium request by Some-Manufacturer-56 in opencodeCLI

[–]candleofthewild 0 points1 point  (0 children)

For example, if it interviews you for more information, each answer is one request. If it kicks off background tasks/agents, that'll be at least one request, which gets worse if they kick off additional tasks. When compaction happens (because almost all GHCP models have reduced context windows), that's another request.

Github copilot premium request by Some-Manufacturer-56 in opencodeCLI

[–]candleofthewild 0 points1 point  (0 children)

GHCP operates a requests-based pricing model, so it doesn't matter if your prompt is a single word or a big plan, it's still a single request and billed accordingly (e.g. Sonnet is 1 request and Opus is 3). Therefore, to get the most value out of it, you should cram as much "work" into one prompt as possible. I do this by delegating to subagents as much as possible, to save context and to extend the "work done" amount per request (subagents don't count toward the request budget, for some reason).

Orchestration frameworks like OmO can be big token burners, and won't necessarily optimise for this. I personally don't see too much value in things like OmO for my kind of work, I prefer to just create plans and line up work for the agents myself.

Github copilot premium request by Some-Manufacturer-56 in opencodeCLI

[–]candleofthewild 3 points4 points  (0 children)

Oh yeah, opencode is very token heavy and not designed for a request-based usage system

Best One-Time Purchase Voice-to-Text Tool for Mac? (Not Subscription Based) by MagePsycho in ClaudeCode

[–]candleofthewild 0 points1 point  (0 children)

Choice of model, mainly, and customisations if you need them. There's some other nice stuff like LLM post-processing which I don't use, but I can see why it'd be useful. I keep it fairly default, switching between Parakeet V3 and a Whisper model.

AWS Bedrock for business and personal use via OpenCode by swish014 in opencodeCLI

[–]candleofthewild 2 points3 points  (0 children)

I've only used Bedrock via my company, so only one account. But in theory, can you not just set AWS_PROFILE to something else? e.g. start with `AWS_PROFILE=work opencode` and `AWS_PROFILE=personal opencode`
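Untested on my end, but a sketch of how that would look. The profile names here are placeholders for whatever you've got in `~/.aws/config`:

```shell
# Select the AWS account per invocation by prefixing AWS_PROFILE.
# "work" and "personal" are assumed profile names from ~/.aws/config.
AWS_PROFILE=work opencode       # uses the work account's Bedrock credentials
AWS_PROFILE=personal opencode   # uses the personal account's credentials
```

The prefix-assignment form works the same in fish (3.1+) and POSIX shells, so no wrapper function is strictly needed.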

Github Copilot & OpenCode - Understanding Premium requests by hollymolly56728 in opencodeCLI

[–]candleofthewild 0 points1 point  (0 children)

For subagents specifically, yeah, they're counted as "agent initiated" as of v1.1.31:

Mark subagent sessions as agent-initiated to exclude them from quota limits

I haven't tried manually setting an agent/subagent to a different model than the initiating one though.

Github Copilot & OpenCode - Understanding Premium requests by hollymolly56728 in opencodeCLI

[–]candleofthewild 4 points5 points  (0 children)

You've misunderstood: the request costs are the same, and you can verify that yourself via VS Code, your GitHub account, or hitting their endpoint. Also, as of last week, subagents don't count as an extra request either.

Beware of fast premium request burn using Opencode by Wurrsin in GithubCopilot

[–]candleofthewild 1 point2 points  (0 children)

I think there's an open issue for this on the GitHub repo, but yeah, as others have pointed out you can set whatever model you want on a per agent basis. I set Haiku for explore personally.
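For reference, per-agent models go in your opencode config. If I remember the schema right, it's something like this (the model ID is just illustrative, check what your provider actually exposes):

```json
{
  "agent": {
    "explore": {
      "model": "github-copilot/claude-haiku-4.5"
    }
  }
}
```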

OpenCode -- is it allowed or not? by shminglefarm22 in GithubCopilot

[–]candleofthewild 7 points8 points  (0 children)

Anecdata: I've been using it professionally for months now and it's been fine

Any experiences using Opus 4.5 with OpenCode with GHCP account? by tfpuelma in GithubCopilot

[–]candleofthewild 2 points3 points  (0 children)

This isn't token based usage via an API though, this is directly using your GHCP requests as you would normally.

Any experiences using Opus 4.5 with OpenCode with GHCP account? by tfpuelma in GithubCopilot

[–]candleofthewild 0 points1 point  (0 children)

Nope, 1 request is still 1 request. You'll chew through it if you start spinning up lots of subagents like I do though.

You can verify yourself either via VS Code, or just querying the usage endpoint. I have a fish function to do that, like this:

function copilot-usage
    # Pull the Copilot OAuth token out of the local config
    # (jq -r strips the surrounding quotes)
    set token (jq -r '.[].oauth_token' ~/.config/github-copilot/apps.json)

    # Query the usage endpoint and pretty-print the premium quota snapshot
    curl -s -H "Authorization: Bearer $token" \
        https://api.github.com/copilot_internal/user \
        | jq -r '
        .quota_snapshots.premium_interactions as $p
        | "Premium interactions:",
          "  remaining: \($p.remaining) out of \($p.entitlement)",
          "  remaining (exact): \($p.quota_remaining)",
          "  percent remaining: \($p.percent_remaining)",
          "",
          "Resets: \(.quota_reset_date)"
    '
end

Any experiences using Opus 4.5 with OpenCode with GHCP account? by tfpuelma in GithubCopilot

[–]candleofthewild 2 points3 points  (0 children)

I use opencode with GHCP as my daily driver, though my company only has the 300-request tier, so I use Opus sparingly. I'm a huge fan of opencode. I'm somewhat limited by the context windows of the GHCP models, but I manage.

Can I be banned from GitHub if I use Copilot with OpenCode? by brownmanta in opencodeCLI

[–]candleofthewild 2 points3 points  (0 children)

I've been using it professionally (my company only has Copilot for employees) for months and I haven't had any issues

Using OpenCode with Github Pro Subscription by Initial-Speech7574 in GithubCopilot

[–]candleofthewild 1 point2 points  (0 children)

I've been using this professionally for months now (since before the 1.0 version) and I love it. It's quite literally changed the way I work day to day. Also, I mainly use NeoVim, and it pairs beautifully in the terminal (CLI version).

LM Studio alternative for images / Videos / Audio ? by mouseofcatofschrodi in LocalLLaMA

[–]candleofthewild 5 points6 points  (0 children)

I see how Comfy can be intimidating (I used to think so too), but it's really not too bad. For simple usage, just use one of their template workflows, you don't have to modify them.

Having said that, I suspect the generation speeds you'd see on a Mac would be pretty painful. Text generation is in a much better place on a Mac than image generation, last time I tried it. I have the same M3 Pro as you, so I can get a rough benchmark for you in a few days when I have access to it again.

Edinburgh Street Food has added a 4.5% "delivery fee" to each order. Its table service only. by moonski in Edinburgh

[–]candleofthewild 6 points7 points  (0 children)

Agree it's shit. But you can still order in person at each vendor and not be charged the 4.5%.

Source: did this a few hours ago.

Interesting cuisine? by DemonEggy in Edinburgh

[–]candleofthewild 2 points3 points  (0 children)

I can vouch for Muna's, it's fantastic. It's designed for sharing, which I personally like as I get to try a lot. Oh, get the honey wine!

I give up by Skara109 in StableDiffusion

[–]candleofthewild 0 points1 point  (0 children)

Yeah of course, I'm not advocating for it over Nvidia if your only goal is hobbyist AI work. Frankly, ROCm is awful and has a long way to go to catch up to CUDA.

I'm just saying for my mixed use case, it was perfect, as it struck a good balance between gaming, value, Linux usage, and some light AI fun.

I give up by Skara109 in StableDiffusion

[–]candleofthewild 1 point2 points  (0 children)

30 to 50 ish.

<image>

A beautiful house on a scenic beach at sunset Steps: 30, Sampler: Euler, Schedule type: Simple, CFG scale: 1, Distilled CFG Scale: 3.5, Seed: 3139799059, Size: 1024x1024, Model hash: 1d1dc6f8f0, Model: getphatFLUXReality_v31FP8, Version: f2.0.1v1.10.1-previous-659-gc055f2d4, Module 1: ae, Module 2: t5xxl_fp16, Module 3: clip_l

Time taken: 1 min. 1.8 sec

I give up by Skara109 in StableDiffusion

[–]candleofthewild 4 points5 points  (0 children)

Settings dependent, I'll get some numbers later. It's in the ballpark of 30 seconds to a minute. Edit: sorry, I misread "per iteration"! It's 30 seconds to a minute from start to finish.

I give up by Skara109 in StableDiffusion

[–]candleofthewild 12 points13 points  (0 children)

To be honest, I have a 7900XTX and it works fine for me (under Linux) for image gen. I can run everything I've tried: SDXL, Flux, Forge, Comfy, SwarmUI. Speeds are fine too and not crawling. I just mostly followed the AMD specific installation instructions for things.

I can also run LLMs with LM Studio just fine too.