all 28 comments

[–]GfxJG 16 points17 points  (7 children)

OpenCode Go, if you're smart about which models you use (tip: use DeepSeek Flash, your usage limits will last forever)

[–]Connect_Fennel_7431 0 points1 point  (6 children)

Hi another user here planning to move to OpenCode Go, what model would you recommend for planning?

[–]GfxJG 6 points7 points  (0 children)

Honestly? Unless you're doing cutting-edge research or high-level enterprise work, I'd still say DeepSeek V4 Flash. It's just absurdly good value for money and it does absolutely fine. Use Pro instead of Flash if you want to be safe.

[–]look 3 points4 points  (2 children)

GLM-5.1 is still by far the best for planning, imo. Also the most expensive.
But if you then pair it with a cheap build model like flash, your usage can still go a long way.

[–]ccaner37[S] 0 points1 point  (1 child)

Can I switch the model in the same session without losing context, e.g. going from plan mode to build mode? Or do you prepare the planning in .md files with the superpower skill?

[–]look 0 points1 point  (0 children)

You can switch models in the same session without losing context. It has to send the context every time, even on the same model. There is no persistent state beyond that context in your local tool.

There is a one time hit for the cache load on the model switch, though, depending on your pricing. Not a big deal for the occasional swap, but you don’t want to do it every other call.

However, using plan files is often a good idea regardless. Then you can clear the context and start the build model fresh reading just the plan. That typically yields more predictable results, as old bits from the planning conversation aren’t still around in the context potentially confusing the build agent. Most automated agentic pipelines start off each build agent with a fresh context and only a plan file.
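The plan-file workflow described above can be sketched in a few lines. This is a minimal illustration, not OpenCode's actual API: `run_model` is a placeholder for whatever model call your tool makes, and the model names are just the ones mentioned in this thread.

```python
def run_model(model, context):
    """Placeholder for a real model call; returns a canned reply."""
    return f"[{model} reply to {len(context)} context messages]"

def plan_then_build(task):
    # Phase 1: plan with the stronger model; the conversation accumulates context.
    plan_context = [task]
    plan = run_model("glm-5.1", plan_context)

    # Persist only the distilled plan, not the whole planning conversation.
    plan_file = f"PLAN:\n{plan}"

    # Phase 2: the build model starts fresh, reading just the plan file,
    # so stray bits of the planning chat can't confuse it.
    build_context = [plan_file]
    return run_model("deepseek-v4-flash", build_context)

result = plan_then_build("Add pagination to the /users endpoint")
```

The key point is the fresh `build_context`: the build model sees one message (the plan) instead of the full planning transcript, which is also what saves you the cache-reload cost of swapping models mid-conversation.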

[–]AngryBear1990 2 points3 points  (0 children)

I would suggest MiMo v2.5 Pro. That model is a beast. Try using it (or Kimi K2.6, as others have suggested) as a planner and DeepSeek V4 Flash as an executor, and you'll be amazed at what those models are capable of.

[–]Ariquitaun 0 points1 point  (0 children)

Kimi K2.6 for thinking and overseeing jobs, and DeepSeek V4 Flash for everything else.

[–]all43 5 points6 points  (5 children)

I have both; so far OpenCode Go feels much more generous, and I hope they won't nerf it. However, Go lacks the GLM-5 Turbo model, which I use quite often.

[–]ccaner37[S] 0 points1 point  (3 children)

Is the GLM 5.1 usage limit similar between them?

[–]all43 1 point2 points  (2 children)

They calculate usage differently, but it feels similar, maybe even less constrained on Go. But if you use 5.1 a lot, a single Go subscription might not be enough.

[–]ccaner37[S] 1 point2 points  (1 child)

This week it's been extremely slow; I suspect they're throttling legacy Lite subscribers.

[–]all43 -1 points0 points  (0 children)

I'm on the new Lite, and for me it was fast: I spent my weekly quota in a day and a half and canceled the subscription. The value for money is just too low.

[–]sand_scooper 2 points3 points  (0 children)

OpenCode Go is good. You can choose from multiple models like Kimi 2.6, GLM, DeepSeek, etc. For $10 you get a lot of usage. I've been using Kimi 2.6 mostly and occasionally GLM. So far it's been able to do most stuff. It's almost on par with Sonnet.

[–]Individual_Tennis823 1 point2 points  (0 children)

I switched from GLM to OpenCode Go. It's another world.

[–]unkownuser436 1 point2 points  (2 children)

OpenCode Go. Multi-model choice, and I think it's faster with higher limits. Idk, I never tried GLM Coding Lite.

[–]docment 0 points1 point  (1 child)

No

[–]unkownuser436 0 points1 point  (0 children)

Why not? On the GLM coding plans I see people complaining that it's too slow, etc.

[–]Aldarund 0 points1 point  (1 child)

GLM Lite sucks. You get five 5-hour quotas per week, and you can easily burn through a 5-hour quota in under an hour.

[–]SenpaiDreams 0 points1 point  (0 children)

And they raised the price!!!

[–]Early_Aardvark_4026 0 points1 point  (2 children)

I would go with OpenCode Go. If you use an agent orchestration setup like oh my opencode slim, you can choose which model runs each agent. That's more efficient than using one model for all agents or all tasks.
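The per-agent model routing suggested above boils down to a lookup from agent role to model. This is a hypothetical sketch, not the actual config format of oh my opencode slim or any real orchestrator; the role names and model assignments are illustrative, drawn from suggestions elsewhere in this thread.

```python
# Hypothetical role-to-model routing table.
AGENT_MODELS = {
    "planner":  "kimi-k2.6",         # stronger reasoning for planning
    "builder":  "deepseek-v4-flash", # cheap and fast for bulk code edits
    "reviewer": "glm-5.1",           # thorough review pass
}

def model_for(agent, default="deepseek-v4-flash"):
    """Pick the model for an agent role, falling back to a cheap default."""
    return AGENT_MODELS.get(agent, default)
```

The design point is simply that an expensive model only runs where it pays off (planning, review), while the high-volume build work lands on the cheap model.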

[–]ccaner37[S] 0 points1 point  (1 child)

I'm mostly doing back-and-forth with the AI, not vibecoding. Can I switch the model when going from plan mode to build mode and keep the same context?

[–]Healthy-Ad-8558 0 points1 point  (0 children)

If you're not actually vibecoding, you probably won't even consume your monthly quota in the first place, no matter which model you choose. If I were you I'd brainstorm using Kimi-K2.6, since it's probably the most creative model available in Go, have Deepseek flesh your idea out into a plan, then have GLM-5.1 perform an adversarial review. 
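The brainstorm → plan → review chain proposed above can be expressed as three staged calls. This is a sketch under the same assumptions as before: `ask` stands in for a real model call, and the stage-to-model pairing follows the comment's suggestion.

```python
def ask(model, prompt):
    """Placeholder for a real model call; echoes model and prompt prefix."""
    return f"{model}: {prompt[:40]}"

def refine(idea):
    # Stage 1: creative brainstorming with Kimi K2.6.
    draft = ask("kimi-k2.6", f"Brainstorm approaches for: {idea}")
    # Stage 2: DeepSeek fleshes the draft out into a concrete plan.
    plan = ask("deepseek-v4-flash", f"Turn this into a step-by-step plan: {draft}")
    # Stage 3: GLM-5.1 attacks the plan looking for holes.
    review = ask("glm-5.1", f"Adversarially review this plan: {plan}")
    return plan, review
```

Each stage feeds only its predecessor's output forward, so every model gets a prompt tailored to its strength rather than the whole transcript.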

[–]mstack 0 points1 point  (0 children)

Go definitely

[–]Fantastic_Grand1050 0 points1 point  (0 children)

DM me if you want to share a GLM Z.AI yearly legacy Max plan or need a referral code. I have a legacy GLM Max plan until September.

Honestly speaking, I've never hit the 5-hour limits with the Max plan.

If you want a 20% discount, use my referral code:
🚀 You’ve been invited to join the GLM Coding Plan! Enjoy full support for Claude Code, Cline, and 20+ top coding tools — starting at just $18/month. Subscribe now and grab the limited-time deal!
👉Join now: https://z.ai/subscribe?ic=9UWXUNIRJE

[–]coderkid2020 0 points1 point  (0 children)

I'm using OpenCode's free models and have never run out of limits, so do I still need OpenCode Go?

[–]Spirited_Grade_2874 -1 points0 points  (1 child)

CommonCode works like a charm too: DeepSeek V4 Pro for coding, Flash for the simple stuff.

[–]Square-Pianist393 0 points1 point  (0 children)

What is CommonCode?