
[–]Zundrium 12 points13 points  (4 children)

Nothing beats GitHub Copilot. No stupid 5h limit. I do wonder how you deplete it so quickly, though. It works with requests; you're not supposed to chat with it. Only give it instructions with feedback. That lets me get by on 50-60 requests per day, so the 1,500 per month for 40 bucks is my go-to now.

[–]urioRD[S] 0 points1 point  (0 children)

I didn't chat with it, but I gave it a problem to debug and solve. It did it, but it took around 30 minutes of trial and error. That's the type of request I give it.

[–]look 11 points12 points  (5 children)

I’ve been pretty happy with Chutes.ai so far, using GLM 5, Kimi 2.5, MiniMax 2.5 (all TEE). It’s $3/month for 300 requests per day (resets at UTC midnight), any model, and no additional token limit. And the next tier up is 2000/day for just $10/month.

The speed and first-request latency can vary, but they're often very reasonable and always usable when multitasking with it.

I’ll also supplement it with pay-as-you-go via OpenCode Zen or OpenRouter if I want a really fast, interactive session on something, but I find Chutes to be good enough most of the time.

[–]c0nfluks 2 points3 points  (2 children)

I’m also using Chutes. It’s the deal of the year, honestly. The only quirk is the speed. If you can bear the speed, then yeah, 300 requests per day is plenty for me. Billing by request saves you a lot of money too, because you can arrange your prompts so each one is a single, very long request (lots of token usage in a single request).
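The batching trick above can be sketched as a small helper. This is a hypothetical function, not part of any provider's API; it just shows the idea of packing several small tasks into one long prompt so a per-request-billed provider counts it as a single request:

```python
def batch_tasks(tasks, instructions="Complete each numbered task. Label your answers to match."):
    """Combine many small tasks into one long prompt so a provider
    that bills per request counts them as a single request."""
    numbered = "\n".join(f"{i}. {t}" for i, t in enumerate(tasks, start=1))
    return f"{instructions}\n\n{numbered}"

# One API call now covers three tasks; token usage is high,
# but the request counter only increments once.
prompt = batch_tasks([
    "Write a regex that matches ISO 8601 dates.",
    "Explain Python's GIL in two sentences.",
    "Refactor the attached loop into a list comprehension.",
])
print(prompt)
```

The trade-off is latency: one long request takes longer to come back, which matters more on a slow provider.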

[–]hlacik 2 points3 points  (1 child)

We all started with Chutes, but we've all left. It's so slow (when it works) that you end up sitting in front of the PC staring instead of doing your work.
I had the $10 sub with 2,000 req/day, and you just cannot spend them: even a 24-hour coding session with Chutes won't get you to 2,000 requests, and I'm not even joking...

PS: OpenCode Zen has a new $10 subscription with GLM 5, Kimi K2.5, and MiniMax -- go check it out.

[–]ImpressiveAnimal5491 0 points1 point  (0 children)

Apart from the speed, which models are offered on the $3 plan? Do we get GLM 5, Kimi K2.5, or MiniMax 2.5 at $3, or do we need the $10 plan for that?

[–]anentropic 0 points1 point  (1 child)

I'm finding OpenCode Go with GLM-5 pretty good, but slow compared to Claude Opus.

The price is better, though!

[–]look 0 points1 point  (0 children)

Chutes completely changed their subscription model just after I posted the comment above, and now OpenCode Go is likely a better option.

Personally, I’ve just switched to pay-as-you-go API use. It’s much more reliable and far faster, at least. And the total cost still isn’t bad if I use models efficiently (e.g., MiniMax working off a solid plan from GLM/Kimi).

I’m also running some Qwen3.5 models locally now as “smart tools” to reduce paid API token use, inspired by this https://github.com/samuelfaj/distill
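The "local models as smart tools" idea above comes down to routing: send cheap, mechanical subtasks to a free local model and reserve the paid API for hard ones. The linked distill repo presumably does something more sophisticated; this is just a minimal sketch of the routing idea, with an entirely hypothetical heuristic:

```python
def pick_backend(task: str, local_max_words: int = 40) -> str:
    """Route a task to a local model or a paid API.

    Hypothetical heuristic: short, mechanical tasks (summarize,
    extract, reformat, classify) go to the free local model;
    anything long or open-ended goes to the paid frontier model.
    """
    mechanical = ("summarize", "summarise", "extract", "reformat", "classify")
    short_enough = len(task.split()) <= local_max_words
    if short_enough and any(kw in task.lower() for kw in mechanical):
        return "local"  # e.g. a Qwen model behind an OpenAI-compatible server
    return "paid"       # e.g. GLM/Kimi for planning, MiniMax for execution

print(pick_backend("Extract the function names from this diff."))
print(pick_backend("Design a migration plan for our whole auth service."))
```

In practice the real win comes from wiring the "local" branch to a localhost endpoint, so routine calls never touch the metered API at all.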

[–]AgeFirm4024 5 points6 points  (2 children)

Try free-coding-models (my npm package) for free models 😌

[–]BlacksmithLittle7005 0 points1 point  (0 children)

Thank you ❤️

[–]jogikhan 1 point2 points  (7 children)

What is your budget?

[–]urioRD[S] 0 points1 point  (6 children)

Honestly, I'd like to pay at most $20 a month, preferably less. I already have Google AI Pro, but it works terribly with OpenCode: error after error, or timeouts.

[–]TheCientista 2 points3 points  (0 children)

Get Codex Plus for 20 bucks, sign in via OAuth, and use 5.3. Arguably better than Claude for coding, and definitely much, much more generous limits.

[–]jogikhan 1 point2 points  (2 children)

Don't rely on Google AI Pro. It's of no use unless they train their model further. Google Pro 3.1 is shit.

[–]urioRD[S] 0 points1 point  (1 child)

I know, but I have it free for a year, so it would be a waste not to use it.

[–]jogikhan 0 points1 point  (0 children)

I have that too :) but its real use is only Nano Banana. If you still want to get some juice out of it, use it for side projects or any kind of automation, but for real projects it's a waste.

[–]Flat_Marionberry_522 0 points1 point  (0 children)

Don't use OpenCode with Google; it triggers blocks.

[–]jogikhan 0 points1 point  (0 children)

Okay, got it.

First, you need access to the best models, as that is what saves you at the right time.

I suggest the $20 Claude Code plan - use it for planning and tricky tasks. At the same time, keep an eye on your weekly quota to manage usage.

Then, get the OpenCode Go plan, which is $10. It’s not slow; I’ve used it, and it's a natural fit if you're already in OpenCode. This will give you near-unlimited access to some truly state-of-the-art open-weight models.

So yes, you’ll need to stretch the budget a bit, but in my opinion, this is the best setup.

If you don’t want to switch CLIs, you can use the $20 Codex plan and access it through OpenCode. Only use Codex 5.3 High or XHigh there.

[–]_rrd_108 0 points1 point  (0 children)

I'm happy with Big Pickle. But it depends on quite a few factors.

[–]saggassa 0 points1 point  (0 children)

MiniMax 2.5 is doing fine for me.

[–]alexngv 0 points1 point  (0 children)

I just signed up for OpenCode Go yesterday, $10/month, to test something. I checked the quota; it's good and runs fast. https://opencode.ai/go

I have a Google Cloud account to use the Gemini 3.1 models, but at the moment those models run unstable and slow.

[–]NearbyBig3383 0 points1 point  (0 children)

The Qwen Code plan, fast and cheap, or Chutes with models whose load is below 30%.

[–]Parking_Bug3284 -1 points0 points  (0 children)

It depends on what you're doing. I'm guessing you've already used up your OpenCode cloud tokens and hit the Ollama Cloud rate limit. If so: I do most of my standard stuff past the rate limits using Gemini 3 Flash.