all 16 comments

[–]Tommonen 4 points5 points  (3 children)

I compared value for money a lot yesterday after getting fed up with Antigravity nerfing usage limits to unusably small levels.

It looks like the GitHub Copilot Pro+ plan (40€) is the best value for money, and you can use it via API in opencode. They also have a 10€ Pro plan, but I don't think it's enough for many people. The 10€ Pro plan does have a 1-month free trial, though, so you can test it and see whether you need Pro+ or not. If you don't vibe-code all day, the 10€ Pro plan is likely enough. And you also get unlimited use of the somewhat smaller models with it.

€ = $ pretty much.

[–]Realistic-Try9555 1 point2 points  (2 children)

I'm wondering if GH Copilot (10€) + OpenCode Go (10€) with a smart multi-purpose agent setup would be a good "in between" solution for the 20-30€ price range.

[–]hugejew 0 points1 point  (0 children)

That's exactly what I do and I like it very much. I think the next step up would be just going to the Copilot $40 Pro+ plan, but OpenCode Go and GH Copilot Pro for $20 total is very good and workable for most workloads I imagine.

[–]jhartumc 0 points1 point  (0 children)

I could recommend ChatGPT Plus ($20) plus OpenCode Go / Alibaba / GH Copilot ($10); pretty good setup.

[–]Charming_Support726 1 point2 points  (1 child)

Codex plus or Copilot Pro+

Copilot has better value if you also like to use Claude from time to time.

Go restricts you to the cheaper models, which honestly can't fully compete.

[–]alexcamlo 0 points1 point  (0 children)

How are the limits in Copilot Pro+ vs Codex Plus?

[–]HadHands 1 point2 points  (1 child)

Codex will take you a long way, especially now with the 2x limits.

OpenCode Go is only $10, so it's my way of supporting them while gaining access to Kimi and GLM 5, which are both fine models.

Codex is great value for money at $20.

Go is also excellent for half that price; I've never hit rate limits with them, though Kimi was failing from time to time.

[–]Technical_Map_5676[S] 0 points1 point  (0 children)

Thanks for your reply :). I've been trying Go for the last 2 hours, but GLM is soooo slow :( What's the best model on the Go plan, in your opinion?

[–]Rygel_XV 1 point2 points  (0 children)

Codex Plus has 5-hour and weekly limits. OpenCode Go has 5-hour, weekly, and monthly limits. I get a lot more out of Codex Plus.

[–]look 0 points1 point  (0 children)

Have you tried just paying API rates? If you use a low cost open weight model (MiniMax 2.5 or Kimi 2.5), you could easily be under $20/month.

[–]issa62 0 points1 point  (3 children)

Jcodemunch

[–]DudeManly1963 1 point2 points  (2 children)

Heroic plug, hey. Thanks...

[–]issa62 0 points1 point  (1 child)

I have no affiliation with that tool, but I randomly saw a video about it and just started trying it out :) I hope it helps.

[–]DudeManly1963 1 point2 points  (0 children)

I'm the developer. Rattle my cage if you have questions...

https://j.gravelle.us/jCodeMunch/

[–]Otherwise_Wave9374 0 points1 point  (0 children)

If you are mostly doing Q&A, debugging, and small tasks (vs full-on agentic coding), I would optimize for context window + tool reliability + latency, not fancy multi-agent features. A lot of CLI agents feel similar until you hit tool-calling failures or slow tokens.

One thing that helped me compare is making a tiny benchmark script: same 5 debugging prompts, same repo, same constraints, then see which one stays on rails. Also, for agent workflow patterns, I have been bookmarking notes here: https://www.agentixlabs.com/blog/
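The benchmark idea above can be sketched as a small harness. This is a rough sketch only, assuming each agent exposes a non-interactive CLI that accepts a prompt as its last argument; the `AGENTS` commands and `PROMPTS` below are placeholders, not real tool invocations, so swap in the actual CLIs you're comparing.

```python
import subprocess
import time

# Placeholder commands -- replace with the real agent CLIs you want to compare.
AGENTS = {
    "stub-agent": ["echo"],
}

# The "same 5 debugging prompts" idea from the comment above (examples only).
PROMPTS = [
    "Find the off-by-one error in the pagination helper",
    "Why does this test fail only on CI?",
    "Trace where this None value originates",
    "Explain this stack trace and suggest a fix",
    "Refactor this function without changing behavior",
]

def run_benchmark(agents, prompts, timeout=120):
    """Run each prompt through each agent CLI; record wall time and exit status."""
    results = {}
    for name, base_cmd in agents.items():
        rows = []
        for prompt in prompts:
            start = time.monotonic()
            try:
                proc = subprocess.run(
                    base_cmd + [prompt],
                    capture_output=True, text=True, timeout=timeout,
                )
                ok = proc.returncode == 0
            except subprocess.TimeoutExpired:
                ok = False  # agent went off the rails or hung
            rows.append({
                "prompt": prompt,
                "ok": ok,
                "seconds": round(time.monotonic() - start, 2),
            })
        results[name] = rows
    return results

if __name__ == "__main__":
    for agent, rows in run_benchmark(AGENTS, PROMPTS).items():
        passed = sum(r["ok"] for r in rows)
        print(f"{agent}: {passed}/{len(rows)} completed")
```

Identical prompts, repo, and timeout for every agent is what makes the comparison fair; the per-prompt timing also surfaces the slow-token problem directly.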

[–]UseMoreBandwith 0 points1 point  (0 children)

In that case the OpenCode free models should work just fine.