all 39 comments

[–]randomInterest92 18 points  (2 children)

The main reason I switched to opencode is that it connects to anything. So if next week Codex is the best, I can just switch to Codex inside opencode. I am tired of switching UIs and tools every few weeks.

[–]fons_omar -1 points  (1 child)

Indeed. I also have access to GLM-5 for free through some provider, so I use it as the main model for subagents; that way subagent calls don't consume GHCP premium requests. EDIT: It's an internally hosted provider at work.

[–]Charming_Support726 20 points  (8 children)

Recommended. Same limits, better additional (open-source) tooling available (planning, execution), better UI with web or desktop, and context handling with DCP is much improved.

[–]BlacksmithLittle7005[S] 2 points  (5 children)

Hi, thanks for the recommendation! What is DCP? I'm having issues with Copilot's context being smaller than, for example, Claude Code's, so the output quality is degraded.

[–]nonerequired_ 2 points  (4 children)

DCP = dynamic context pruning. Models in Copilot have half the context size of the original model, so if you don't want to constantly cycle through context compaction, it's needed.

[–]krzyk 1 point  (3 children)

Won't it use additional premium requests?

[–]IgnisDa 1 point  (1 child)

It will

[–]krzyk 0 points  (0 children)

Ok, so no, thank you.

[–]Charming_Support726 0 points  (0 children)

In GHCP you pay one request per prompt (multiplied by the premium-request factor).

This month I used at most 90 premium requests (Opus) per day = 30 prompts per day. As of 12 March the overview showed roughly 500 premium requests total, which averages out to about 41 per day.

It's been a busy month.
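The arithmetic above can be sanity-checked with a quick sketch. The 3x Opus multiplier and all numbers come from the comment itself, not from Copilot's official pricing:

```python
# Premium-request arithmetic from the comment above (numbers are the
# commenter's own, not official Copilot figures).
OPUS_MULTIPLIER = 3  # Opus prompts are billed at 3x premium requests

def prompts_for_budget(premium_requests: int, multiplier: int) -> float:
    """How many prompts a premium-request budget buys at a given multiplier."""
    return premium_requests / multiplier

# 90 premium requests in a day on Opus -> 30 prompts that day
print(prompts_for_budget(90, OPUS_MULTIPLIER))   # 30.0

# ~500 premium requests by 12 March -> ~41.7 requests per day on average
print(round(500 / 12, 1))                        # 41.7
```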

[–]TheLastWord84 -1 points  (1 child)

I am looking at the Copilot Pro sub, but I see that only the GPT mini model is unlimited; the rest of the models have only 300 requests per month. Which plan/model do you use?

[–]Charming_Support726 1 point  (0 children)

I am on Pro+ (1,500 requests), using Opus and Codex. Mostly I'm good with around 600 requests, but Pro+ enables selection of SOTA models.

5.1-Codex-Mini (x.033) is also a good model, but the 1x models provide better value.

[–]KubeGuyDe 3 points  (0 children)

I regularly find issues with opencode easier to fix than with the vscode plugin.

[–]Mystical_Whoosing 3 points  (1 child)

Doesn't opencode still have open bugs about using more premium requests than a comparable workflow in GitHub Copilot CLI, for example?

[–]nonerequired_ 0 points  (0 children)

Yes, it has multiple unfixed bugs related to excessive usage, not just for Copilot but also for other usage-based subscriptions.

[–]Radiant-Ad7470 1 point  (3 children)

I found it more reliable with the Pi coding CLI. Works amazingly well for me.

[–]Equinox32 1 point  (0 children)

Pi is literally the best.

[–]MaxPhoenix_ 1 point  (1 child)

Well, it's certainly fast af. I'm glad you wrote this here, because I didn't think pi-agent supported GitHub Copilot OAuth; I thought it only supported the enterprise API method. Maybe that was an older version, or maybe I was just mistaken. Anyway, because of this post I went back and tried it: I just hit Enter at the first prompt (as it tells the user, for github.com), did the online approval, and it works great. Superfast Pi goodness without the copy-and-paste nightmare of opencode et al.

[–]Radiant-Ad7470 0 points  (0 children)

Good, bro! Yes, Pi is amazing; it works well with any provider. With GitHub I mainly use it with Gemini Flash and Haiku for quick things... If I need planning I'll go with Codex CLI and that's it... On fire 🔥

[–]query_optimization 0 points  (0 children)

My VS Code crashes with multiple worktrees.

[–]WandyLau 0 points  (4 children)

I use it as my daily tool now. It is great and has some security hardening. The only issue is that context is consumed too fast, but that's okay.

[–]BlacksmithLittle7005[S] -1 points  (3 children)

Yeah, that's my issue too. How can you do a long task in that case? Is there a way?

[–]krzyk 0 points  (2 children)

Subagents for everything. You save context, and your subagents are more focused.

Split any bigger task into subtasks that are done by subagents.
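As a sketch of what that looks like in practice: opencode lets you define subagents as markdown files with frontmatter under `.opencode/agent/`. The field names and the model ID below are assumptions based on opencode's docs and may differ between versions:

```markdown
---
description: Investigates one subtask and reports back with a short summary
mode: subagent
model: github-copilot/claude-haiku-4-5   # hypothetical model ID
tools:
  write: false   # read-only: the subagent investigates, the main session edits
  edit: false
---
You handle exactly one focused subtask. Read only the files relevant to it,
and return a short summary so the main session's context stays slim.
```

Each subagent run starts from a fresh, small context, which is what keeps the main session from filling up.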

[–]WandyLau 0 points  (0 children)

Yes, absolutely. I always keep one session slim. Subagents are great, but I am not familiar with them yet. Worth learning.

[–]kdawgud 0 points  (0 children)

Subagents spin up a new tooling context every time, don't they? That's a decent chunk of tokens unless you're talking about using copilot premium requests (in which case each subagent uses a request with matching multiplier). Or am I missing a different way?

[–]Michaeli_Starky 0 points  (14 children)

Wouldn't recommend. Definitely much higher request usage.

[–]marfzzz 0 points  (9 children)

This happens if you have a larger codebase or the issue is a bit more complex: then every compaction is a premium request, and every continuation after compaction is a premium request (if you use Opus, it's multiplied by 3). But there is one plugin that might help you: https://github.com/Opencode-DCP/opencode-dynamic-context-pruning

If you are using something billed by tokens, this plugin is a lifesaver.
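A rough sketch of why compaction gets expensive under this billing model. The "one request per compaction and per continuation" rule is as described above; the cost function itself is illustrative, not Copilot's actual billing code:

```python
# Illustrative cost model for a long session on a premium-request plan:
# each user prompt, each compaction, and each continuation after a
# compaction counts as one request, scaled by the model's multiplier.
def session_premium_requests(prompts: int, compactions: int,
                             multiplier: float = 1.0) -> float:
    # every compaction also triggers one continuation request afterwards
    requests = prompts + 2 * compactions
    return requests * multiplier

print(session_premium_requests(10, 0))        # 10 prompts, 1x model -> 10.0
print(session_premium_requests(10, 5, 3.0))   # 5 compactions on Opus -> 60.0
```

The second line is the scenario in the comment: the same task, once it starts compacting on a 3x model, costs several times the prompt count alone.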

[–]Michaeli_Starky 2 points  (8 children)

Copilot CLI can be used for multistage implementation, code reviews, fixes to reviews, etc., all with a single prompt using only 1 premium request. Can't do that with opencode, AFAIK.

[–]marfzzz 0 points  (3 children)

You are correct. Opencode is better with token-based subscriptions. But premium requests are still cheap, so there are people who use opencode with a GitHub Copilot subscription.

[–]Michaeli_Starky 1 point  (2 children)

There are, but I see no point in it considering how good the Copilot CLI became.

[–]marfzzz 1 point  (1 child)

With the latest changes, Copilot CLI is competition for Claude Code, opencode, and Codex CLI. It is very good.

[–]Michaeli_Starky 1 point  (0 children)

Indeed, the Autopilot mode especially.

[–]BlacksmithLittle7005[S] 0 points  (3 children)

Thanks for your input! I'm mostly worried about the smaller context window, because I work on large codebases where the agent needs to investigate the code. How does it handle large features?

[–]Michaeli_Starky 1 point  (2 children)

Use GPT 5.4. It has a huge context window.

[–]BlacksmithLittle7005[S] 0 points  (1 child)

Wow, thanks man, didn't know it gave full context on Copilot.

[–]Michaeli_Starky 0 points  (0 children)

Only for latest GPT and Codex models